Document id: 0
Predicted class = unrelated
True class: duplicate
Text with highlighted words
history-substring-search plugin not working
Since the last update, the history-substring-search plugin has not been working, and it produces no error messages.
history-substring-search doesn't work after update to Ubuntu 12.10 [has workaround]
history-substring-search just doesn't work since I updated my distro from 12.04 to 12.10.
I mean that when I type something, e.g. `ls`, and press the up-arrow key, it shows me the last history item, not the last item starting with `ls`.
My .zshrc
```
$ cat .zshrc | grep -v -E "^# .*"
ZSH=$HOME/.oh-my-zsh
ZSH_THEME="clean" # fletcherm
plugins=(git history-substring-search command-not-found)
source $ZSH/oh-my-zsh.sh
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
```
ZSH version
```
$ zsh --version
zsh 5.0.0 (i686-pc-linux-gnu)
```
I've just done a clean install of Ubuntu 12.10 on a new machine, and this seems to be happening to me too.
ZSH: `zsh 5.0.0 (x86_64-unknown-linux-gnu)`, OMZ: 22f827e122187032afb1f473d19a4238899e8ecd, [dotfiles](https://github.com/BRMatt/dotfiles/tree/master/zsh)
Debian Wheezy - same issue - all I did was update oh-my-zsh today (and the issue is present on a server I have running Squeeze)
_edit_ ZSH 4.3.10-14 on the Squeeze machine and 4.3.17-1 on Wheezy
I have this issue too, on two different machines, both on ubuntu 12.10 x64. One of them was an upgrade from 12.04, the other one was a clean install. history-substring-search doesn't work on either.
zsh 5.0.0 (x86_64-unknown-linux-gnu)
+1, Ubuntu 12.10, clean install.
It seems to be a `zsh` bug in Ubuntu - https://bugs.launchpad.net/ubuntu/+source/zsh/+bug/1048212
Fortunately, there is a [workaround](https://bugs.launchpad.net/ubuntu/+source/zsh/+bug/1048212/comments/6).
Put this line in `~/.zshenv`:
```
DEBIAN_PREVENT_KEYBOARD_CHANGES=yes
```
Great! It works! Thanks cutalion!
When Oh My Zsh updated on my Mac (10.8.2), this started happening to me as well =(
Same issue here with Ubuntu 12.10 (Unity). Even the workaround didn't work for me.
```
DEBIAN_PREVENT_KEYBOARD_CHANGES=yes
```
does not work for me either :(
System: Ubuntu 12.10
@cutalion's suggestion worked for me.
Any news here? I have the same problem and the workaround does not work for me.
@sotte the default setup started working for me again except on one machine where I had enabled some plugins that weren't enabled on the working ones. I disabled them and it works out of the box again.
Double check which plugins you have enabled and test if any of them is still breaking it.
Here is my .zshrc. No plugin is enabled. The workaround has no effect.
```
ZSH=$HOME/.oh-my-zsh
ZSH_THEME="robbyrussell"
#DEBIAN_PREVENT_KEYBOARD_CHANGES=yes
plugins=(history-substring-search)
source $ZSH/oh-my-zsh.sh
```
hmm.
What version of ZSH do you have?
What commit of Oh-My-Zsh are you at?
_EDIT_ also - did you comment out the workaround because it had no effect or did you add it already commented?
I tried it with and without the workaround. No effect.
omz is the current version: 615e41b0ecdb25acba513fd09619bd56c2eb24eb
zsh 5.0.0 (x86_64-unknown-linux-gnu)
I have no problems with Ubuntu 12.04 and zsh 4.3.17 (x86_64-unknown-linux-gnu). The config is the same. The workaround is not activated.
Same problem for me, and setting DEBIAN_PREVENT_KEYBOARD_CHANGES=yes doesn't help.
Strange that this workaround doesn't work for some people... I have `export DEBIAN_PREVENT_KEYBOARD_CHANGES=yes` in my `.zshrc` and backward search works perfectly for me under Ubuntu 12.10.
In case it's helpful: I have Ubuntu 12.10, zsh 5.0.0 (x86_64-unknown-linux-gnu), and had the same issue initially.
Putting `export DEBIAN_PREVENT_KEYBOARD_CHANGES=yes` in `.zshrc` does not fix the issue.
But putting `DEBIAN_PREVENT_KEYBOARD_CHANGES=yes` into `.zshenv` appears to solve the problem for me.
It could be related to the order in which this variable is set.
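The ordering guess makes sense given zsh's startup sequence. A sketch of why the file matters, based on my understanding of the Debian/Ubuntu zsh packaging (worth verifying against your local `/etc/zsh` files):

```shell
# zsh sources startup files for an interactive shell in this order:
#   /etc/zsh/zshenv -> ~/.zshenv -> /etc/zsh/zshrc -> ~/.zshrc
# Debian/Ubuntu's /etc/zsh/zshrc rebinds keys unless
# DEBIAN_PREVENT_KEYBOARD_CHANGES is already set when it runs. Setting the
# variable in ~/.zshrc (sourced *after* /etc/zsh/zshrc) can therefore be
# too late, while ~/.zshenv is sourced early enough to reliably work:
echo 'DEBIAN_PREVENT_KEYBOARD_CHANGES=yes' >> ~/.zshenv
```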
xubuntu 12.10
updating `.zshrc` with `DEBIAN_PREVENT_KEYBOARD_CHANGES=yes` solved the issue
~~Same issue here. Updated to Ubuntu 12.10. Code below doesn't help:~~
Fixed by adding this to `.zshenv` (not `.zshrc`):
```
DEBIAN_PREVENT_KEYBOARD_CHANGES=yes
```
Did you try putting it in `.zshrc` and `.zshenv`?
Yes, I found the solution immediately after posting the comment. Editing `.zshrc` is not required at all. Thanks!
Editing `.zshrc` sometimes works... but it shouldn't!
`.zshenv` is the one to edit.
Worked for my Ubuntu.
Worked for me, THANKS! Damn, it made my day :D
PS : Linux Mint 14 (based on ubuntu 12.10)
Worked for me
Ubuntu 13.04, zsh 5.0.0-2ubuntu3
@cutalion - Great, it works, many thanks! :)
@robbyrussell - Are you going to make this part of the installation script? If not, then this issue should probably be closed, as it seems to be Ubuntu-specific... (or?)
This is something I've suffered with for many months and just dealt with. Finding this fix just made my month.
It's working for me, but I am getting this message whenever I use it:
_history-substring-search-end:9: _zsh_highlight: function definition file not found
System: Ubuntu 13.04
Document id: 19
Predicted class = unrelated
True class: duplicate
Text with highlighted words
[meta-issue] Protractor should degrade more gracefully for non-angular pages
Big issue for collecting ideas and coordinating the large goal - in general, Protractor should be kinder when confronting a page which does not have Angular available.
We'd like to preserve the property that we always override modules when the user requests it.
Better bootstrap-pausing also falls into this category.
https://github.com/angular/protractor/issues/1742
https://github.com/angular/protractor/issues/2567
Timeout after $location.url call
I have an issue where calling `$location.url('/some/thing')` breaks the tests.
Everything looks good in the browser, but the test run never makes it past this test.
I have reduced it to this single `it` statement:
``` js
it('should redirect the user to the default space', function () {
  page.usernameInputEl.sendKeys(page.mockUser.email);
  page.passwordInputEl.sendKeys(page.mockUser.password);
  page.loginButtonEl.click();
  // The page is correctly redirected in the browser,
  // but the test times out on this expectation:
  expect(browser.getLocationAbsUrl()).toBe('/spaces/1');
});
```
Implementation
``` js
User
  .login(credentials)
  .then(function () { // This resolves in about 300-500 ms
    $location.url('/spaces/1');
  });
```
``` bash
Started
A Jasmine spec timed out. Resetting the WebDriver Control Flow.
F
Failures:
1) The login state with correct password should redirect the user to the default space
Message:
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
```
``` js
console.log(jasmine.DEFAULT_TIMEOUT_INTERVAL); // 10000
```
Is this related to https://github.com/angular/protractor/issues/1797?
I have a similar problem. It seems the tests work correctly in my browser, but all I get is
```
A Jasmine spec timed out. Resetting the WebDriver Control Flow.
F
Failures:
1) foo...
Message:
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
Stack:
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
at [object Object]._onTimeout (/Users/foo/Workspace/ws/node_modules/jasmine-core/lib/jasmine-core/jasmine.js:1812:23)
```
I don't know if it happens because of `$location.url` in my code, too. Maybe it's hidden in `ui-router`...
I am also using ui-router btw
I am having the exact same issue. When my angular app modifies the location, I get a timeout.
If I increase Jasmine's timeout I get a synchronization timeout:
```
1) Home page view lets you lookup a CID
Message:
Failed: Timed out waiting for Protractor to synchronize with the page after 90 seconds. Please see https://github.com/angular/protractor/blob/master/docs/faq.md. The following tasks were pending:
- $timeout: function o(){var e=n||n.$$destroyed,t=e?[]:r.queue,o=e?null:r.digest;r.queue=[],r.timeout=null,r.digest=!1,t.forEach(function(e){e()}),o||d.$digest()}
Stack:
Error: Failed: Timed out waiting for Protractor to synchronize with the page after 90 seconds. Please see https://github.com/angular/protractor/blob/master/docs/faq.md. The following tasks were pending:
- $timeout: function o(){var e=n||n.$$destroyed,t=e?[]:r.queue,o=e?null:r.digest;r.queue=[],r.timeout=null,r.digest=!1,t.forEach(function(e){e()}),o||d.$digest()}
```
I don't use the $timeout service at all (I don't use $interval either), unless Angular/Angular Material is calling it internally.
Same here
Any progress on that? I've hit this issue today when I was writing some tests that checked whether the page correctly changes depending on the action.
I'm using ui-router.
I have been experiencing this Jasmine spec timeout issue for a long time when I run multiple specs in the IE 11 browser. Also, once one spec hits the Jasmine spec timeout, most of the specs that follow it hit a similar timeout. I have been struggling to fix this for a long time, with no luck.
I tried many things to solve it, as below, but the issue still exists:
1. Increased the maximumSpecCallbackDepth value to 2000
2. Removed the beforeEach function from all spec files and moved that code into the it block
3. Used only local variables declared in the it block
4. Used Jasmine instead of Jasmine 2 in the conf file
5. Used only one it block per spec file
I also upgraded to Protractor 3.3.0 but the issue persists. When I run a single spec file with one it block, the spec runs fine, but with many specs (I tried with more than 10, 20, 30, and 100 specs together) it gives these errors, and every run behaves differently.
I googled this and tried many solutions.
Every spec first logs into the app and then carries out the required operations. I figured out that most of my specs give this error when navigating to the required page using browser.get()... Logging in to the app works correctly, but it mainly fails on the next browser.get().
I am not sure that this issue is related to this defect, but I am commenting here anyway in the hope of getting something that can help me.
any updates?
@punithj Please open a new issue for your problem (timeouts using multiple specs in IE 11). Also, it would be helpful if you could see if you get the same problem in other browsers.
Document id: 32
Predicted class = unrelated
True class: duplicate
Text with highlighted words
MobileSafariClickEventPlugin requires touch events to be initialized
Events do not trigger when the node does not have the `cursor: pointer` style on it.
Here you have an example:
http://jsfiddle.net/kb3gN/1345/
There is no reference to `pointer` (not even `cursor`) in the React codebase, so I'm assuming this is some weird iOS bug, React might be able to work around it though.
My guess is that this is an issue that has been around for quite a while (interesting timing that I ran into the same thing in a different context this morning):
http://www.quirksmode.org/blog/archives/2010/10/click_event_del_1.html
http://stackoverflow.com/questions/7358781/tapping-on-label-in-mobile-safari
You'll see a bunch of workarounds that add empty handlers when the page loads to address this (`$('label').click(function() {});`)
Doesn't seem like React's problem to fix - either add 'cursor: pointer' or empty event handlers where it causes an issue.
@MattKunze We do try to fix things that can be fixed and it wouldn't surprise me if you can temporarily add a click-handler on touch and solve it like that.
Yes, we have https://github.com/facebook/react/blob/master/src/browser/eventPlugins/MobileSafariClickEventPlugin.js to fix this exact problem but it sounds like perhaps it's not working.
Is that plug-in registered automatically, or do you have to do something to enable it?
I can confirm that the original jsfiddle posted above isn't working for me on iOS
Digging through the repository, I found an event plugin which seems to be related to a similar bug:
https://github.com/facebook/react/blob/master/src/browser/eventPlugins/MobileSafariClickEventPlugin.js
But the problem seems to be a little wider: Chrome on iOS is also affected, so this is a mobile WebKit issue.
Adding
``` javascript
componentDidMount: function() {
  this.getDOMNode().onclick = function() {};
}
```
to the component is a workaround. It seems like the plugin is not working at the moment
onClick broken on iOS.
iOS Safari really doesn't want you clicking anything that's not an `<a>` tag. This is a known issue: http://stackoverflow.com/questions/5421659/html-label-command-doesnt-work-in-iphone-browser/6472181#6472181
The way you fix this is by putting an empty "onclick" attribute on nodes you wish to emit click events. Yep.
So presumably:
```
div({onClick: function(){alert('clicked');}}, 'click me');
```
should emit:
```
<div onclick>click me</div>
```
on iOS. Ensuring that the click event is actually reachable from an iOS device.
As the Stack Overflow link points out, this is also an issue for `<label>` elements associated with `<input>` elements. In order to behave as a clickable label, they must also include an empty "onclick" attribute.
```
label(null, input({type: 'checkbox'}), 'check me');
<label onclick><input type="checkbox"> check me</label>
```
Is this only an issue with iOS4 and below? Can we just generate markup for every node with `onclick=""` on the affected browsers?
I think it affects modern iOS browsers as well.
style="cursor:pointer" also fixes this :)
Does it only happen for onClick, or also for onTouchStart, onDblClick, ...?
Does attaching an onClick event listener to the DOM node fix the issue?
`* {cursor: pointer;}` is the least offensive way to do this, IMO
I think this is fixed now, right @zpao @yungsters?
Document id: 1
Predicted class = unrelated
True class: right
Text with highlighted words
Duplicate identifier errors when using @types inside a Windows junction
**TypeScript Version:** 2.0.3
**Code**
- Install any type declaration from `@types` on npm. (For this example I am using `@types/react`, but this is also reproducible with other declaration files: `@types/angular`, `@types/jquery`, `@types/lodash`, etc.)
- Create a junction to the source folder.
- Then, navigate into the junction (instead of the real folder).
- Attempt to compile the following file:
``` ts
import * as React from 'react';
```
A minimal reproducing repo with complete steps to reproduce is available here: https://github.com/smrq/tsc-junction-repro
**Expected behavior:**
Compiles without errors.
**Actual behavior:**
Compiles with the following errors:
``` sh
C:/Code/tsc-junction-repro/node_modules/@types/react/index.d.ts(7,21): error TS2300: Duplicate identifier 'React'.
node_modules/@types/react/index.d.ts(7,21): error TS2300: Duplicate identifier 'React'.
```
TypeScript typings not found
pnpm v0.39.0
node v6.5.0
OS Windows 7
### Code to reproduce the issue:
```
mkdir typings-test
cd typings-test
pnpm install typescript@1.8.10 mocha @types/mocha
mkdir test
cd ..
```
Create a `test.ts` file with the following content:
```
describe('Test', () => {
  describe('run', () => {
    it('should run', () => {
      console.log('i run');
    });
  });
});
```
Then run:
```
node_modules/.bin/tsc test/test.ts
node_modules/.bin/mocha
```
### Expected behavior:
- test.ts gets compiled into test.js
- mocha runs and passes
### Actual behavior:
- test.ts fails to compile with the error 'Cannot find name describe'
- mocha does not find any test files (it looks for *.js)
I suggest we copy the @types packages into node_modules instead of linking them. That seems to solve the issue, as commented here: https://github.com/rstacruz/pnpm/issues/394#issuecomment-251905995
Document id: 34
Predicted class = right
True class: right
Text with highlighted words
Change event fires extra times before IME composition ends
### Extra details
* Similar discussion with extra details and reproducing analysis: https://github.com/facebook/react/issues/8683
* Previous attempt to fix it: https://github.com/facebook/react/pull/8438 (includes some unit tests, but not sufficient to be confident in the fix)
------
### Original Issue
When I was trying this [example](https://jsfiddle.net/reactjs/n47gckhr/light/) from https://facebook.github.io/react/blog/2013/11/05/thinking-in-react.html, any Chinese characters input with the Chinese pinyin input method would fire too many renders, like:

Actually I would expect those not to fire before I confirm the Chinese character.
Then I tried another kind of input method - wubi input method, I got this:
It's weird too. So I did a test [in jQuery](http://jsbin.com/yepogahobo/1/edit?html,js,console,output):

Only after I press the space bar to confirm the character does the `keyup` event fire.
I know the implementations of jQuery's `keyup` and React's `onChange` might differ, but I would expect React's `onChange` to handle Chinese characters the way jQuery's `keyup` does.
cc @salier :) – What should we do here?
I think we should not fire `onChange` until the IME string is committed.
One way to handle this in `ChangeEventPlugin` would be to ignore all `input` events between `compositionstart` and `compositionend`, then use the `input` event immediately following `compositionend`.
I did some quick testing on OSX Chrome and Firefox with Simplified Pinyin and 2-Set Korean, and the event order and data seem correct enough. (I predict that we'll have problems with IE Korean, but we may get lucky.)
I think we may continue to see issues with alternative input methods like the Google Input Tools extension, but there may be workarounds for that.
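The filtering idea above can be sketched as a tiny state machine (a hypothetical helper, not React's actual `ChangeEventPlugin` code): swallow `input` events fired between `compositionstart` and `compositionend`, and surface the first `input` event after the composition ends. Note that some browsers fire the final `input` before `compositionend`, which this simple version would miss.

```typescript
type ImeEvent = 'compositionstart' | 'compositionend' | 'input';

// Decides whether an `input` event represents committed text (emit a change)
// or an intermediate IME state (swallow it).
class CompositionFilter {
  private composing = false;

  shouldEmitChange(type: ImeEvent): boolean {
    switch (type) {
      case 'compositionstart':
        this.composing = true;
        return false;
      case 'compositionend':
        this.composing = false;
        return false; // the change surfaces on the *next* input event
      case 'input':
        return !this.composing;
    }
  }
}

// Typing a character with a pinyin IME and committing it might produce:
const events: ImeEvent[] = [
  'compositionstart', 'input', 'input', 'compositionend', 'input',
];
const filter = new CompositionFilter();
console.log(events.map((e) => filter.shouldEmitChange(e)));
// -> [false, false, false, false, true]: only the committed text fires change
```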
Chinese/Japanese characters not supported
To reproduce:
Switch language to Katakana (Japanese) and type "ka".
Output: "k" (It doesn't input the "a", and the caret jumps back before the "k".)
Expected output: カ (It should combine "ka" into this.)
Update:
- Seems to be connected to all accents as well
- Works "Single line input" / "Adv. options" - demo but not in the "Multiple trigger patterns" - demo
indeed, @chrassendk. `é`, for instance, does not work 😕
Document id: 48
Predicted class = unrelated
True class: right
Text with highlighted words
Duplicate type declarations with npm link
Using TypeScript 1.7.3.
Suppose I have the below npm packages.
The declaration files are generated by TypeScript compiler, and referred to from the other packages by means of the way described [here](https://github.com/Microsoft/TypeScript/wiki/Typings-for-npm-packages).
### package-a
ts src:
``` ts
export default class ClassA {
  private foo: string;
  bar: number;
}
```
ts declaration:
``` ts
declare class ClassA {
  private foo;
  bar: number;
}
export default ClassA;
```
### package-b (depends on package-a):
ts src:
``` ts
import ClassA from 'package-a';

namespace ClassAFactory {
  export function create(): ClassA {
    return new ClassA();
  }
}

export default ClassAFactory;
```
ts declaration:
``` ts
import ClassA from 'package-a';

declare namespace ClassAFactory {
  function create(): ClassA;
}

export default ClassAFactory;
```
### package-c (depends on package-a and package-b):
ts src:
``` ts
import ClassA from 'package-a';
import ClassAFactory from 'package-b';
let classA: ClassA;
classA = ClassAFactory.create(); // error!!
```
The last line causes an error during compilation:
```
error TS2322: Type 'ClassA' is not assignable to type 'ClassA'.
  Types have separate declarations of a private property 'foo'.
```
When I remove the line `private foo;` from the declaration of package-a, TypeScript does not emit any error.
However, this workaround is a bit painful.
I understand that exposing private properties to declaration is by design (https://github.com/Microsoft/TypeScript/issues/1532).
I think TypeScript should ignore private properties when compiling variable assignment.
Or is there any better workaround for this?
There's only one root declaration of `ClassA` here, so this error shouldn't occur.
Well, sorry I found that this is related to `npm link`.
When I use `npm link`, packages are installed as below, as it simply creates symbolic links.
```
package-c
|
-- node_modules
|
-- package-a
| |
| -- index.d.ts
| |
| ...
|
-- package-b
|
-- index.d.ts
|
-- node_modules
| |
| -- package-a
| |
| -- index.d.ts
| |
| ...
|
...
```
As shown, it looks like there are two different declaration files for package-a.
If I install packages normally by using `npm install`, this does not happen because the declaration of package-a is not included in package-b in this case.
I hope there would be some solution for this anyway, but it might be difficult and low priority.
I ended up not using `npm link`, and this does not matter any more for me.
Fair enough, but someone else might :wink:
There are actually two files on disk with two declarations of ClassA, so the error is correct. But we need to consider node modules when we compare these types. This issue has been reported before in https://github.com/Microsoft/TypeScript/issues/4800; for enums we changed the rule to a semi-nominal check. We could possibly do the same for classes.
+1 on this with TS 1.7.5 with all relevant packages npm-linked. I tried to construct a test case that exhibits the issue but could not. No matter what I tried, TS was fine with the scenario I see failing with TS2345 in my application, and as far as I could tell, all copies of the problematic .d.ts file were symlinks to the same file, so there should not have been differing declarations of the type. It would be nice, however, if the error emitted by TypeScript referenced the files which declared the two incompatible types, as that might shed light on something I'm not considering. Right now it says there are two definitions but does nothing to help the developer pinpoint the issue.
As a workaround you can use a `<any>` cast on the conflicting expression to skip the type check. Obviously this might require you to add another type annotation where you might not have had to before. I hope someone can isolate this issue at some point.
EDIT: made it clear that NPM link is at play in my case
Noticed TS 1.8 is available, upgraded and the issue still exists in that version as well.
Thanks for all the work in analyzing and documenting this issue. We're having the same problem in some of our code bases. We ported some projects to properly use `package.json` dependencies but are now seeing this when using `npm link` during development.
Is there anything I can help to solve this issue?
I'm using [Lerna](https://lernajs.io/) which symlinks packages around and seeing the issue there as well. Typescript version 2.0.3.
Unfortunately Lerna and its symlinks are a hard requirement, so I used this nasty workaround to get this to compile fine while still being properly type-checkable by consumers:
``` ts
export class MyClass {
  constructor(foo: Foo) {
    (this as any)._foo = foo;
  }

  get foo() {
    return (this as any)._foo as Foo;
  }
}
```
The class is very small so it wasn't that arduous, and I don't expect it to change really ever, which is why I consider this an acceptable workaround.
FYI, I've also ended up here as a result of using `npm link` and getting this error. Has anybody found a workaround for this?
@xogeny can you elaborate on how npm link is causing this issue for you?
@mhegazy Well, I started getting errors like the one above (except I was using `Observable` from `rxjs`, i.e., "Type 'Observable' is not assignable to type 'Observable'"). This, of course, seemed odd because the two modules referenced `Observable` from exactly the same version of `rxjs`. But where the types "met", I got an error. I dug around and eventually found this issue, where [@kimamula pointed out](https://github.com/Microsoft/TypeScript/issues/6496#issuecomment-171865914) that if you use `npm link`, you'll get this error. I, like others, worked around this (in my case, I created a duplicate interface of just the functionality I needed in one module, rather than referencing `rxjs`).
Does that answer your question? I ask because I don't think my case is any different from the others here, so I'm not sure if this helps you.
We have done work in TS2.0 specifically to enable `npm link` scenarios (see https://github.com/Microsoft/TypeScript/pull/8486 and #8346). Do you have a sample where i can look at where npm link is still not working for you?
Huh. I'm running 2.0.3 (I checked). I'll try to create a reproducible case.
By the way, you should follow up on these threads since they imply that this is still an issue as of TS 2.0:
https://github.com/ReactiveX/rxjs/issues/1858
https://github.com/ReactiveX/rxjs/issues/1744
The issue I'm seeing in my Lerna repo is somewhat involved, so I made a stripped-down version of it at https://github.com/seansfkelley/typescript-lerna-webpack-sadness. It might even be webpack/ts-loader's fault, so I've filed https://github.com/TypeStrong/ts-loader/issues/324 over there as well.
I'm using TypeScript 2.0.3 and I'm seeing this error with Observable as described above, e.g.
```
Type 'Observable<Location[]>' is not assignable to type 'Observable<Location[]>'.
  Property 'source' is protected but type 'Observable<T>' is not a class derived from 'Observable<T>'.
```
I am hitting this in a Lerna monorepo package as well. It feels like most, but not all, parts of the type system use the realpath to uniquely identify files. If you travel down a branch that uses the symlink path rather than the realpath, you'll end up with identical-but-different types.
This is a pretty brutal problem that will only affect more complex codebases, and it seems impossible to work around without taking drastic measures, so I hope I can convince you all to give it the attention it deserves. 😄
It's most noticeable in cases where you have an app that depends on Dependency A, and Dependency A depends on Dependency B and vends objects that contain types from Dependency B. The app and Dependency A both `npm link` Dependency B and expect to be able to import types from it and have them describe the same thing.
This results in deep error messages, and I'm on the verge of going through and eliminating all of the `private` and `protected` properties in my libraries because I've already lost so much time to this:
```
TSError: ⨯ Unable to compile TypeScript
tests/helpers/test-application.ts (71,11): Argument of type '{ initializers: Initializer[]; rootPath: string; }' is not assignable to parameter of type 'ConstructorOptions'.
  Types of property 'initializers' are incompatible.
    Type 'Initializer[]' is not assignable to type 'Initializer[]'.
      Type 'Application.Initializer' is not assignable to type 'Application.Initializer'.
        Types of property 'initialize' are incompatible.
          Type '(app: Application) => void' is not assignable to type '(app: Application) => void'.
            Types of parameters 'app' and 'app' are incompatible.
              Type 'Application' is not assignable to type 'Application'.
                Types of property 'container' are incompatible.
                  Type 'Container' is not assignable to type 'Container'.
                    Types of property 'resolver' are incompatible.
                      Type 'Resolver' is not assignable to type 'Resolver'.
                        Types of property 'ui' are incompatible.
                          Type 'UI' is not assignable to type 'UI'.
                            Property 'logLevel' is protected but type 'UI' is not a class derived from 'UI'. (2345)
```
Really appreciate you all looking into this; thank you!
@tomdale are you using Webpack, `tsc` or another build tool? My issue seems to only happen when compiled via Webpack (see the linked repo from my [previous comment](https://github.com/Microsoft/TypeScript/issues/6496#issuecomment-254362669)).
@seansfkelley That looks like https://github.com/TypeStrong/ts-node.
That's right, it's using `ts-node` (for the root application). The dependencies, however, are packages compiled with `tsc`.
I just ran into this issue and it is a major problem for us, because we try to split up our back end into many small libraries. During development, we often need to npm link our repos. The specific issue I ran into, which prompted me to find this, is the use of rxjs Observables and interfaces:
```
// in repo A
export class HttpAdapter {
  request(url: string, options?: HttpRequestOptionsArgs): Observable<HttpResponse> {
    return Observable.of({});
  }
}

// in repo B
export class HttpRequestAdapter implements HttpAdapter {
  request(url: string, options?: HttpRequestOptionsArgs): Observable<HttpResponse> {
    return Observable.of({});
  }
}
```
This works if I don't `npm link`, but when I do, I get:
```
Error:(10, 14) TS2420: Class 'HttpRequestAdapter' incorrectly implements interface 'HttpAdapter'.
  Types of property 'request' are incompatible.
    Type '(url: string, options?: HttpRequestOptionsArgs) => Observable<HttpResponse>' is not assignable to type '(url: string, options?: HttpRequestOptionsArgs) => Observable<HttpResponse>'.
      Type 'Observable<HttpResponse>' is not assignable to type 'Observable<HttpResponse>'.
        Property 'source' is protected but type 'Observable<T>' is not a class derived from 'Observable<T>'.
```
The only suggestion I can make is to avoid `private`. I don't publish any packages with `private` anymore because of this issue and just use JavaScript-style `_` prefixes instead. I ran into it with https://github.com/Microsoft/TypeScript/issues/7755, which is a similar discussion around why `private` kicks in a nominal type system instead of a structural one, and have hence banned it on my own projects, because it's too easy to end up with version differences (e.g. npm 2 or using `npm link`).
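A minimal sketch of why the `_` convention sidesteps the problem (the `ObservableA`/`ObservableB` names are hypothetical stand-ins for the two on-disk copies of the same declaration): with a `private` member, TypeScript compares the classes nominally, so two separate declarations are never compatible, while public `_`-prefixed members keep the comparison structural.

```typescript
// Copy 1, e.g. as resolved from the app's own node_modules
class ObservableA {
  _source: unknown; // "private" only by naming convention
  constructor(source: unknown) {
    this._source = source;
  }
}

// Copy 2, e.g. as resolved through a linked dependency's own node_modules
class ObservableB {
  _source: unknown;
  constructor(source: unknown) {
    this._source = source;
  }
}

// Structural typing: the two independent declarations are interchangeable.
const obs: ObservableA = new ObservableB(42); // compiles fine
console.log(obs._source); // -> 42

// Had both classes declared `private source` instead, the assignment above
// would fail to compile with TS2322 ("Types have separate declarations of
// a private property 'source'"), exactly as in the errors in this thread.
```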
@blakeembrey When you say avoid private, are you suggesting that I can change something in my code? I am assuming that the Observable type definition is the problem, no?
@jeffwhelpley Yes, sorry, you're not at fault. It's `Observable`. Unfortunately the "avoid `private`" advice is very slim and wasn't entirely applicable to you 😄 Maybe you can make an issue on, I'm assuming, `rxjs` about the use of `private` in their public interfaces?
Edit: I mostly commented because I had followed the issue earlier and avoided joining in with my own experiences, but figured I could write my thoughts down again too; they're similar to https://github.com/Microsoft/TypeScript/issues/6496#issuecomment-255232592 (where @tomdale suggests eliminating `private` and `protected`; I did the same a while back).
I got the impression from @mhegazy that he felt there was no issue with `npm link`. But it still seems to be plaguing us and others. So I'm not sure where this issue stands? Is it an acknowledged issue with TS 2.0+ or am I just missing a workaround somewhere?!?
I'm getting this same issue and it doesn't appear to be caused by `npm link`. I still get it if I install it using `npm install file.tar.gz`. Here's the error:
```
app/app.component.ts(46,5): error TS2322: Type 'Observable<boolean | Account>' is not assignable to type 'Observable<boolean | Account>'.
  Property 'source' is protected but type 'Observable<T>' is not a class derived from 'Observable<T>'.
```
Here's what my `app.component.ts` looks like:
``` ts
export class AppComponent implements OnInit {
  private user$: Observable<Account | boolean>;
  private loggedIn$: Observable<boolean>;
  private login: boolean;
  private register: boolean;

  constructor(public stormpath: Stormpath) {}

  ngOnInit() {
    this.login = true;
    this.register = false;
    this.user$ = this.stormpath.user$;
    this.loggedIn$ = this.user$.map(user => !!user);
  }
}
```
It's complaining about the `this.user$` line. `Stormpath` has `user$` defined as the following:
```
@Injectable()
export class Stormpath {
user$: Observable<Account | boolean>;
```
@xogeny Odd, my understanding was that definition identity was tied to file location, which would mean they are always going to cause issues using `npm link` (because the `npm link`ed dependency would have its own dependencies installed). Perhaps definition identity has changed; using file hashes might be a good workaround in TypeScript. Unfortunately there are a dozen different ways to end up with duplicate modules in JavaScript (`npm install` from GitHub, manual clones, version conflicts that can even result in the same version landing in different locations because of how node's module resolution algorithm works, etc.).
@blakeembrey Perhaps. But then what was [this](https://github.com/Microsoft/TypeScript/issues/6496#issuecomment-254336000) about?
Note, I'm not complaining. I'm just trying to figure out if there is some hope of this being resolved or not. It is a serious thorn in our side for [all the reasons](https://github.com/Microsoft/TypeScript/issues/6496#issuecomment-256828329) @jeffwhelpley mentioned.
@xogeny I know, I'm trying too, I'd love to see it resolved correctly 😄 I read the linked issues, but they are all designed to resolve the realpath of a symlink, which implies that if you have two (real) files they'll still conflict because they'll resolve to different locations. Which is what happens when you `npm link` from one project into another, as both would have their own dependencies that can differ with re-exported symbols from the `npm link`ed package.
Edit: I can confirm, all the issues are because of two files. `npm link` would trigger it because it's simple to have a dependency in a repo that you just linked that is the same dependency as in the project you linked to. A simple repro would be to do an `npm install` of the same dependency at two different levels of an application and watch them error out.
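The "two different levels" failure mode follows directly from Node's resolution walk. A minimal sketch (not TypeScript's actual implementation) of the candidate paths considered for a bare import:

```typescript
// For a bare specifier, Node-style resolution walks from the importing
// file's directory up to the filesystem root, checking
// node_modules/<name> at each level. If the same package is installed at
// two of these levels, two distinct files can be loaded, and their
// declarations are treated as different types.
import * as path from "path";

function nodeModulesCandidates(fromDir: string, name: string): string[] {
  const candidates: string[] = [];
  let dir = path.resolve(fromDir);
  for (;;) {
    candidates.push(path.join(dir, "node_modules", name));
    const parent = path.dirname(dir);
    if (parent === dir) break; // reached the filesystem root
    dir = parent;
  }
  return candidates;
}
```

So a dependency installed both in a project's own `node_modules` and inside a linked library's `node_modules` resolves to whichever copy is closest to the importing file.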

Typescript@next and npm linked node_modules
<!-- BUGS: Please use this template. -->
<!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript -->
<!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md -->
I'm having this problem on Windows 10. I have a linked node_module that was created using `npm link @pocesar/moip2`. Typescript@next is trying to use the typings in the linked `node_modules\@types\bluebird\index.d.ts`
**TypeScript Version:** nightly
The `tsc --listFiles` shows:
```
../node-moip2/node_modules/@types/bluebird/index.d.ts(759,5): error TS2300: Duplicate identifier 'export='.
g:/www/neuro/node_modules/typescript/lib/lib.d.ts
g:/www/neuro/node_modules/@types/node/index.d.ts
g:/www/neuro/node_modules/@types/express-serve-static-core/index.d.ts
g:/www/neuro/node_modules/@types/mime/index.d.ts
g:/www/neuro/node_modules/@types/serve-static/index.d.ts
g:/www/neuro/node_modules/@types/express/index.d.ts
g:/www/neuro/node_modules/@types/body-parser/index.d.ts
g:/www/neuro/node_modules/@types/lodash/index.d.ts
g:/www/neuro/node_modules/@types/lru-cache/index.d.ts
g:/www/neuro/node_modules/@types/bluebird/index.d.ts
g:/www/neuro/src/server/modules/correios.ts
g:/www/node-moip2/node_modules/@types/bluebird/index.d.ts
g:/www/node-moip2/moip.d.ts
```
My local tsconfig.json is:
``` json
{
"compilerOptions": {"module": "commonjs",
"noImplicitAny": true,
"removeComments": false,
"preserveConstEnums": true,
"inlineSourceMap": true,
"outDir": "lib",
"noImplicitReturns": true,
"noImplicitUseStrict": true,
"noImplicitThis": true,
"noUnusedLocals": true,
"allowSyntheticDefaultImports": false,
"allowUnusedLabels": false,
"allowUnreachableCode": false,
"noUnusedParameters": true,
"pretty": true,
"newLine": "LF",
"allowJs": false, "moduleResolution": "node",
"target": "es5",
"declaration": false
},"rootDir": "src/server",
"exclude": [ "lib",
"src/user",
"src/client",
"src/buy",
"src/admin",
"data",
"config",
".vscode",
".tscache",
"node_modules",
"tests",
"conf",
"keys",
".sass-cache",
"views",
"public"
]
}
```
The tsconfig.json in moip2:
``` json
{
"compilerOptions": {
"module": "commonjs",
"noImplicitAny": true,
"removeComments": false,
"preserveConstEnums": true,
"sourceMap": false,
"outDir": ".",
"moduleResolution": "node",
"target": "es2015",
"declaration": true
},
"files": [
"moip.ts"
]
}
```
Do I need to use `exclude`? Don't `files` and `exclude` cancel each other out?
**Expected behavior:**
Should ignore the node_modules on linked packages as well
**Actual behavior:** The exclude option is being ignored for npm link'd packages
Looks like we need to get the real path when enumerating the types directories.
This is also an issue on OSX. The `@types` folders from within the _linked libraries_ are included when compiling the top-level package:
```
node_modules/@types/react/index.d.ts(7,21): error TS2300: Duplicate identifier 'React'.
../my-library/node_modules/@types/react/index.d.ts(7,21): error TS2300: Duplicate identifier 'React'.
```
I use my own [gulp-npmworkspace](https://www.npmjs.com/package/gulp-npmworkspace) to manage large projects and this is a show-stopper for me. Currently, the only workaround is to use physical paths in the package.json and call `npm install` every time you make a change.
Same root cause as #9771; the path machinery as it stands is fine as long as we don't freak out over importing two files with the same UMD global
Hmm, I'm not so sure the path machinery is fine actually. With this `tsconfig.json`
```
{
"compilerOptions": {
"jsx": "preserve",
"experimentalDecorators": true,
"emitDecoratorMetadata": true,
"target": "es2015",
"module": "es2015",
"moduleResolution": "node",
"outDir": ".tmp",
"allowJs": true,
"rootDir": "."
},
"exclude": [
"node_modules",
"gulpfile.js"
]
}
```
I get these errors
```
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/common.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/config.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/config/cli-config.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/config/npm-config.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/config/syslog-config.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/container.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/exception.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/logger.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/transports.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
error TS6059: File '/home/rasmus/Development/beanloop/shujin/node_modules/winston/lib/winston/transports/transport.js' is not under 'rootDir' '/home/rasmus/Development/beanloop/socialview/data'. 'rootDir' is expected to contain all source files.
```
shujin is an npm linked dependency and for some reason typescript uses its modules even though node_modules is excluded.
Should I open a new issue?
I'm still getting this problem with a node_module that has been linked to the project and has a `@types` folder. It's breaking typings from my typings directory (a bunch of duplicate exports).
I think this is going to be fixed in 2.0.1, but that hasn't been released yet?
I'm using `typescript@next` (aka 2.1.0)
You're right @pocesar, I can't get it working either. I'm switching back to using DefinitelyTyped directly. No matter how I try to organise my packages, it doesn't work when I have a dependency symlinked. It still brings in the shared @types/xxx from the dependency and I get the duplicates.
This bug still exists on tsc version 2.0.3 and should be reopened. @RyanCavanaugh this bug still exists for `@types` sub-modules in an `npm link`ed dependency (as per @pocesar).
it happens when your `npm link`ed dependency re-exports definitions found in `@types`.
for example, in my new version of `xlib` I do `export import lodash = require("lodash");`
that works fine when I `npm install xlib@next`, but when I then `npm link xlib` I get those same `Duplicate identifier 'export='` errors.
the same issue occurs if you have a relative link, such as: `import xlib = require("../../xlib");`
this is a regression from TypeScript 1.x, where you could have relative links with no problem.
Same issue here. It seems like tsc also doesn't respect `typeRoots` in tsconfig.json; it still looks for types outside of the folders listed there.
@jpzwarte this is a closed issue, marked as "fixed". if you are still running into issues please file a new ticket, include the version you are using, and enough details to allow us to diagnose the issue.
Yes, I got this too... though the tools are different, it seems to be the same root cause.
We are using `cnpm`, which is a modified version of npm that caches packages from outside China (I dislike it, but it is the fastest way to download packages here).
But `cnpm` has a different package-saving layout (maybe for compatibility): it downloads packages under `/usr/local/lib/node_modules/.<package name>` (note the dot) instead of the common `/usr/local/lib/node_modules/<package name>`, and then creates a link between the two.
Then I also run `npm link <package name>` for my local application. The output looks like this:
```
$ npm link "@types/react"
/Users/yarco/Sites/ladycat_v2/admin/node_modules/@types/react -> /usr/local/lib/node_modules/@types/react -> /usr/local/lib/node_modules/.@types/react_npminstall/node_modules/.@types/react@0.14.44
```
Then when I run `tsc`, I get:
```
$ tsc
node_modules/@types/react-dom/index.d.ts(6,21): error TS2300: Duplicate identifier 'ReactDOM'.
node_modules/@types/react/index.d.ts(7,21): error TS2300: Duplicate identifier 'React'.
../../../../../usr/local/lib/node_modules/.@types/react-dom_npminstall/node_modules/.@types/react-dom@0.14.18/index.d.ts(6,21): error TS2300: Duplicate identifier 'ReactDOM'.
../../../../../usr/local/lib/node_modules/.@types/react-dom_npminstall/node_modules/.@types/react@0.14.44/index.d.ts(7,21): error TS2300: Duplicate identifier 'React'.
../../../../../usr/local/lib/node_modules/.@types/react_npminstall/node_modules/.@types/react@0.14.44/index.d.ts(7,21): error TS2300: Duplicate identifier 'React'.
```
Any solution/tricks for now?
Update: #9091 also is the same issue.
@yarcowang
On Windows, here is a workaround using a batch script with robocopy. I wrote this because the `tsconfig.json`-based workarounds unfortunately don't fix Visual Studio, as it seems to have another means of scanning solution TS files which breaks due to the same or a similar issue.
``` batch
:rerunloop
@echo watching for changes to project files.............. (Ctrl-C to cancel)
@rem *******************************
@rem npm link fix : copy code into node_modules of the consuming projects: xlib -> blib and slib
@robocopy ..\xlib\src ..\blib\node_modules\xlib\src *.* /MIR /NJH /NJS /NDL /XD .git
@if NOT "%errorlevel%" == "0" (
@rem a copy occurred, so copy both
@robocopy ..\xlib\dist ..\blib\node_modules\xlib\dist *.* /MIR /NJH /NJS /NDL /XD .git
@robocopy ..\xlib\src ..\slib\node_modules\xlib\src *.* /MIR /NJH /NJS /NDL /XD .git
@robocopy ..\xlib\dist ..\slib\node_modules\xlib\dist *.* /MIR /NJH /NJS /NDL /XD .git
@rem set the src dirs readonly
@attrib +R ..\blib\node_modules\xlib\src\* /S /D
@attrib +R ..\slib\node_modules\xlib\src\* /S /D
)
@rem *******************************
@rem another alternative way to fix npm link issues: copy source code to the consuming project and have the consuming project treat it as a native part of its project
@robocopy .\dtll-interop\src\mirror-source .\dtll-app-browser\src\dtll-interop *.* /MIR /NJH /NJS /NDL
@if NOT "%errorlevel%" == "0" (
@rem a copy occurred, so copy both
@robocopy .\dtll-interop\src\mirror-source .\dtll-server-dashboard\src\dtll-interop *.* /MIR /NJH /NJS /NDL
@rem and set results readonly
@attrib +R .\dtll-server-dashboard\src\dtll-interop\* /S /D
@attrib +R .\dtll-app-browser\src\dtll-interop\* /S /D
)
@timeout /t 1 /nobreak > NUL
@goto rerunloop
```
@RyanCavanaugh still having this problem on 2.2-dev.20170131, please reopen
```
../cim-service-locator/node_modules/@types/consul/index.d.ts(6,1): message TS4090: Conflicting definitions for 'node' found at 'G:/www/cim-service-locator/node_modules/@types/node/index.d.ts' and 'g:/www/cim-backend/services/node_modules/@types/node/index.d.ts'. Consider installing a specific version of this library to resolve the conflict.
../cim-service-locator/node_modules/@types/lodash/index.d.ts(19211,15): error TS2428: All declarations of 'WeakMap' must have identical type parameters.
../cim-service-locator/node_modules/@types/request/index.d.ts(8,1): message TS4090: Conflicting definitions for 'node' found at 'G:/www/cim-service-locator/node_modules/@types/node/index.d.ts' and 'g:/www/cim-backend/services/node_modules/@types/node/index.d.ts'. Consider installing a specific version of this library to resolve the conflict.
```
Unless #6496 is going to be the 'main issue' for this problem (which has been lingering for a long time by now), this shouldn't be tagged as fixed, because it isn't.
this is still not working!
```bash
ERROR in [at-loader] ../corifeus-web-material/node_modules/@types/hammerjs/index.d.ts:9:5
TS2300: Duplicate identifier 'export='.
ERROR in [at-loader] ../corifeus-web-material/node_modules/@types/hammerjs/index.d.ts:71:6
TS2300: Duplicate identifier 'RecognizerTuple'.
ERROR in [at-loader] ../corifeus-web-material/node_modules/@types/hammerjs/index.d.ts:139:15
TS2300: Duplicate identifier 'HammerInput'.
ERROR in [at-loader] ../corifeus-web-material/node_modules/@types/hammerjs/index.d.ts:217:15
TS2300: Duplicate identifier 'MouseInput'.
```
FYI, there is a bug in `npm` that is causing issues: https://github.com/npm/npm/issues/10343
it used to work for me, but after switching my npm versions around it's broken again due to the issue I linked above.
who knows when that will get fixed....
with these settings it works
https://github.com/patrikx3/corifeus-web-pages/blob/master/tsconfig.json
multiple linked projects
@patrikx3 thanks, it works, assuming you don't run into the npm bug I mentioned above.
for those looking at Patrik's tsconfig file, the important lines are:
```
"baseUrl": "./",
"paths": {
"*": [
"node_modules/@types/*",
"*"
]
}
```
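For context on why those lines help: a `"*"` entry in `paths` gives the compiler an explicit, project-local list of substitutions to try for every bare import, so lookups stop falling through to a linked package's own `node_modules`. A toy model of the substitution (an illustration, not the compiler's code):

```typescript
// Toy model of tsconfig "paths": each template's "*" is replaced with the
// imported specifier, producing lookup locations relative to "baseUrl",
// tried in order.
function applyPaths(specifier: string, templates: string[]): string[] {
  return templates.map((t) => t.replace("*", specifier));
}
```

With the config above, an `import 'react'` is looked up first under `node_modules/@types/react`, then as `react` itself.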
@jasonswearingen this solution doesn't work for me unfortunately. Has anyone found a solution to this problem? It really impedes being able to develop npm modules locally.
You can try deleting node_modules; sometimes that works, but not always.
It will not always work with a symlink :)
I use 2 repos, a linked one and a non-linked one.
Sometimes I remove node_modules, sometimes I have to switch to the non-linked clone.
It will never be perfect.
Besides, I use yarn, but that doesn't help either. Ciao!!
Webpack config not finding nested node_modules
### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
```
@angular/cli: 1.0.2
node: 6.10.1
os: darwin x64
@angular/common: 4.1.1
@angular/compiler: 4.1.1
@angular/core: 4.1.1
@angular/forms: 4.1.1
@angular/http: 4.1.1
@angular/platform-browser: 4.1.1
@angular/platform-browser-dynamic: 4.1.1
@angular/router: 4.1.1
@angular/cli: 1.0.2
@angular/compiler-cli: 4.1.1
```
### Repro steps.
1. Clone the repo: https://github.com/jshcrowthe/ng-cli-nested-node-modules.git
1. Enter the repo dir
1. Run `npm install`
1. Run `ng serve`
1. Observe error from `ng serve` command
### The log given by the failure.
```log
ERROR in ./~/firebase/app/shared_promise.js
Module not found: Error: Can't resolve 'promise-polyfill' in '$PATH_TO_REPO/nested-node-module/node_modules/firebase/app'
@ ./~/firebase/app/shared_promise.js 22:35-62
@ ./~/firebase/app/firebase_app.js
@ ./~/firebase/app.js
@ ./src/app/app.component.ts
@ ./src/app/app.module.ts
@ ./src/main.ts
@ multi webpack-dev-server/client?http://localhost:4200 ./src/main.ts
```
### Desired functionality.
Webpack should compile this module and its nested children correctly.
### Mention any other details that might be useful.
n/a
Unfortunately, yes, it currently doesn't work but a solution is in progress.
Note that everything works with `yarn` as it will hoist the dependencies to the top level.
Also, I would highly suggest using https://github.com/angular/angularfire2 which makes integrating firebase into an Angular app quite quick and painless.
@hansl this is the problem we talked about yesterday with Firebase.
Enabling the use of the Node.js module resolution algorithm (a minor code change) will ensure the correct and proper module is used within an app. As witnessed in this issue as well as #3023, this is a real and repeatable concern.
The change will cause issues in two main situations:
* Angular libraries which do not properly use peer dependencies.
* Linked libraries which have their peer dependencies installed via dev dependencies.
The first is essentially a non-issue as the libraries are improperly packaged.
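To illustrate the packaging point: a properly packaged Angular library would normally declare the framework as a peer dependency (so it is resolved from the consuming app) and keep a copy only in devDependencies for its own build. The name and versions below are hypothetical:

```json
{
  "name": "my-angular-lib",
  "version": "0.0.1",
  "peerDependencies": {
    "@angular/core": "^4.0.0"
  },
  "devDependencies": {
    "@angular/core": "4.1.1"
  }
}
```

A library that instead lists `@angular/core` under `dependencies` ships its own copy, which is the improperly packaged case described above.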
The second is a concern and will cause trouble for those wishing to use linked Angular library packages in development scenarios. However, ensuring an application can resolve and use the proper module (and allowing the application to function properly) would generally outweigh the added developer inconvenience for a use case that, while not uncommon, is not extremely common either. This point, however, is definitely one for discussion.
Error loading npm linked custom library with aot
> Please provide us with the following information:
> ---------------------------------------------------------------
### OS?
> Windows 7, 8 or 10. Linux (which distribution). Mac OSX (Yosemite? El Capitan?)
Mac OSX, El Capitan
### Versions.
> Please run `ng --version`. If there's nothing outputted, please run in a Terminal: `node --version` and paste the result here:
```
angular-cli: 1.0.0-beta.24
node: 6.9.2
os: darwin x64
@angular/common: 2.4.1
@angular/compiler: 2.4.1
@angular/core: 2.4.1
@angular/forms: 2.4.1
@angular/http: 2.4.1
@angular/platform-browser: 2.4.1
@angular/platform-browser-dynamic: 2.4.1
@angular/router: 3.4.1
@angular/compiler-cli: 2.4.1
```
### Repro steps.
> Was this an app that wasn't created using the CLI? What change did you do on your code? etc.
From a really fresh angular cli project.
Only adding in app.module.ts :
```
import { MessageModule } from 'my_lib';
@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    MessageModule.forRoot() // <-- include here
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
```
Adding one of the component in app.component.html
`<my-message type="error">This is an error message</my-message>`
And running
`ng build --aot`
### The log given by the failure.
> Normally this includes a stack trace and some more information.
```
chunk {0} main.bundle.js, main.bundle.map (main) 1.48 kB {2} [initial] [rendered]
chunk {1} styles.bundle.css, styles.bundle.map, styles.bundle.map (styles) 1.77 kB {3} [initial] [rendered]
chunk {2} vendor.bundle.js, vendor.bundle.map (vendor) 1.06 MB [initial] [rendered]
chunk {3} inline.bundle.js, inline.bundle.map (inline) 0 bytes [entry] [rendered]
ERROR in Error encountered resolving symbol values statically. Calling function 'makeDecorator', function calls are not supported. Consider replacing the function or lambda with a reference to an exported function, resolving symbol Injectable in /Users/admin/dev/web-library/node_modules/@angular/core/src/di/metadata.d.ts, resolving symbol OpaqueToken in /Users/admin/dev/web-library/node_modules/@angular/core/src/di/opaque_token.d.ts, resolving symbol OpaqueToken in /Users/admin/dev/web-library/node_modules/@angular/core/src/di/opaque_token.d.ts
ERROR in ./src/main.ts
Module not found: Error: Can't resolve './$$_gendir/app/app.module.ngfactory' in '/Users/admin/dev/web-library/ng24/src'
@ ./src/main.ts 4:0-74
@ multi main
ERROR in ./~/@angular/core/src/linker/system_js_ng_module_factory_loader.js
Module not found: Error: Can't resolve '/Users/admin/dev/web-library/ng24/src/$$_gendir' in '/Users/admin/dev/web-library/ng24/node_modules/@angular/core/src/linker'
@ ./~/@angular/core/src/linker/system_js_ng_module_factory_loader.js 69:15-36 85:15-102
@ ./~/@angular/core/src/linker.js
@ ./~/@angular/core/src/core.js
@ ./~/@angular/core/index.js
@ ./src/main.ts
@ multi main
```
### Mention any other details that might be useful.
My message module declares and exports the message component, with inlined css/html.
It's exported with a `forRoot()`:
```
export class MessageModule {
public static forRoot(): ModuleWithProviders {
return {ngModule: MessageModule, providers: []};
}
}
```
my_lib is generated with ngc, with a .metadata.json next to each d.ts.
With the following tsconfig parameters:
```
...
"target": "es5",
"module": "es2015",
...
"angularCompilerOptions": {
"skipTemplateCodegen": true
}
```
> ---------------------------------------------------------------
> Thanks! We'll be in touch soon.
Having a very similar issue here with a similar setup. I'd love to see someone respond to this.
```Calling function 'makeDecorator', function calls are not supported. Consider replacing the function or lambda with a reference to an exported function```
Where is this makeDecorator function? is it a part of your project?
I had a similar problem and it was caused by a small project which I imported that didn't appear to be prepared for AOT yet. The makeDecorator function wasn't part of the project code either.
I have the same problem. The error is thrown when I try to import a module from outside my angular-cli root.
E.g. `import {TestModule} from '../../../..<angular-cli root>/../../Private/test-share/src/app/test/test.module';`
Doesn't matter if I use the source .ts file or the compiled version. Also happens when linking with yarn link.
As far as I can tell now, there is definitely a problem with imported modules outside the project's root. When I use `yarn link`, which creates a symlink, and try to import the module in my project, I get the "Calling function 'makeDecorator'" error. Now if I drag my library into another npm module (e.g. @angular/common) everything works just fine.
This means it is not possible to use self-written libs with angular-cli currently, unless they are already published somewhere. Hope this gets fixed soon.
@SebastianSchenk check [this](https://github.com/msusur/angular-lib-exp) one as an example, I had the same issue but this example seems currently working fine.
```sh
npm install https://github.com/msusur/angular-lib-exp.git --save
```
```typescript
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
FormsModule,
HttpModule,
AngularLibExpModule.forRoot()
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
```
```html
<h1>
{{title}}:
{{ 3.23 | roundDown }}
<cm-display [message]="title"></cm-display>
</h1>
```
@msusur thank you for the example. Using `.forRoot()` helps as a workaround to avoid the error.
However, it also helped me find the root of the problem. I just cloned the lib from GitHub and linked it with `yarn link` to my project. After that, I imported the module and added it to the imports array of my AppModule **without** using `.forRoot()`. Everything worked, no errors. After installing the dependencies of the lib, the error was thrown, and I had to use `.forRoot()` to make the error disappear.
To sum it up: the error occurs only when the lib also contains node_modules. I think it has something to do with @angular/* being in both node_modules folders.
@SebastianSchenk I was having a very similar issue with a feature module I am building as its own dependency. Following your hypothesis, I removed the `node_modules` directory from my feature module before linking it. After that, everything worked like a charm.
I'm getting a bunch of errors in my feature module since none of its dependencies are present... but that more or less comes down to an issue with duplication in the `node_modules` folder.
@jppellerin I had the same issue, removing `node_modules` from the feature module fixed it. Thanks for sharing! 👍
It seems like this is an issue caused by TypeScript and not by Angular CLI. TypeScript has problems resolving dependencies when `yarn link` or `npm link` is used, as you can see here [#11916](https://github.com/Microsoft/TypeScript/issues/11916) and here [#6496](https://github.com/Microsoft/TypeScript/issues/6496). This problem could be solved with TypeScript 2.1 (read more [here](https://github.com/ReactiveX/rxjs/issues/1858#issuecomment-257008634)).
For now, thanks to this post [#1858](https://github.com/ReactiveX/rxjs/issues/1858) I found a good workaround which doesn't affect the actual code of the library, as is the case with an empty `.forRoot()`. Furthermore, you don't have to delete `node_modules` anymore. Just add a `paths` property with all Angular dependencies to the tsconfig file of the Angular CLI project where you include your library with `yarn link` or `npm link`.
Here is my full tsconfig.json:
```
{
"compilerOptions": {
"baseUrl": "",
"declaration": false,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"lib": ["es6", "dom"],
"mapRoot": "./",
"module": "es6",
"moduleResolution": "node",
"outDir": "../dist/out-tsc",
"sourceMap": true,
"target": "es5",
"typeRoots": [
"../node_modules/@types"
],
"paths": {
"@angular/common": ["../node_modules/@angular/common"],
"@angular/compiler": ["../node_modules/@angular/compiler"],
"@angular/core":["../node_modules/@angular/core"],
"@angular/forms": ["../node_modules/@angular/forms"],
"@angular/platform-browser": ["../node_modules/@angular/platform-browser"],
"@angular/platform-browser-dynamic": ["../node_modules/@angular/platform-browser-dynamic"],
"@angular/router": ["../node_modules/@angular/router"],
"@angular/http": ["../node_modules/@angular/http"]
}
}
}
```
Closing as it has been answered. Great writeup and workaround by @SebastianSchenk!
I edited the title and added the FAQ label to make it easier for people having the same problem to find this.
@SebastianSchenk perhaps worth noting: Your solution (which resolved this issue for me - a million thanks) can be rewritten as:
`"paths": { "@angular/*": ["../node_modules/@angular/*"] }`
Local changes not being picked up when running tests
This is a bit of a strange issue that is hard to explain, but I will try my best.
I was working on a reported issue in one of my addons; the issue was a strange behaviour in the FastBoot render, hence the need for a FastBoot testing harness. I set up a test using ember-fastboot-addon-tests to attempt a "red/green" test, making sure that I could capture a failing test before getting it working again.
I managed to capture the failing scenario with a test, but when I implemented the fix and tried to run `ember fastboot:test` it didn't actually pick up any of my changes. I then ran the dummy app with ember-cli-fastboot and it seemed to have fixed my issue when doing manual testing.
At this point I thought it was a cache issue so I cleared my `dist` and `tmp` folders and ran `ember fastboot:test` again and it still didn't work. Even a fresh build on Travis wasn't working correctly.
I then published the addon as a patch version because I was relatively sure the fix was valid because of my manual testing and then when running the local `ember fastboot:test` again it started working.
My best guess for a mental model of what is going on here is that `ember fastboot:test` is somehow installing my addon from npm instead of using the local files, which would mean that it would need an npm publish before it would pick up changes that would fix any tests that are broken.
I could probably recreate this issue if you would like a demo, but I'm wondering if this is a known behaviour already before I spend the time on attempting a re-creation.
Let me know if you have any questions 👍
@mansona thanks for reporting! This is strange, and certainly not an expected behavior! 🤔
Your addon should have been symlinked to the temporary app's folder, so it should pick up any changes without requiring you to publish it. This lib uses `ember-cli-addon-tests` under the hood, which symlinks the addon here: https://github.com/tomdale/ember-cli-addon-tests/blob/master/lib/utilities/pristine.js#L237-L267
Not sure what's happening in your case. Maybe you could run it with debugging output enabled:
```bash
DEBUG=ember-cli-addon-tests ember fastboot:test
```
This should output some symlink messages (see the source code).
Btw, are you using this under OSX/Linux or Windows? I don't know for sure how the latter behaves regarding symlinks...
Oh wait a sec, just recalled this one: https://github.com/tomdale/ember-cli-addon-tests/issues/176
Maybe this is what is happening to you?
Npm (5) install will override the symlinked addon under test
Not a bug here, but in npm itself: https://github.com/npm/npm/issues/17287
Lost quite some time because of this, so mentioning this here in case others get bitten by this. `ember-cli-addon-tests` will symlink the addon under test into `node_modules` of the temporary app, but if you call `npm install` afterwards (e.g. by calling `app.run('npm', 'install')`), npm 5 will remove the symlink and install the addon from the npm registry (which is probably *not* the same code that you are working on). Does not happen with npm 3.
All I have to say is WTF.
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-serial/1619/
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Expected error:
<*errors.StatusError | 0xc8214e6200>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server has asked for the client to provide credentials (get replicationControllers rc)",
Reason: "Unauthorized",
Details: {
Name: "rc",
Group: "",
Kind: "replicationControllers",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Unauthorized",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 401,
},
}
the server has asked for the client to provide credentials (get replicationControllers rc)
not to have occurred
```
Previous issues for this test: #27479 #27675
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13095/
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jun 28 11:18:10.369: Number of replicas has changed: expected 3, got 4
```
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6070/
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jun 29 00:16:22.557: Number of replicas has changed: expected 3, got 4
```
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gke/7573/
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jul 5 05:20:39.403: Number of replicas has changed: expected 3, got 4
```
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-serial/1681/
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jul 5 15:13:33.840: Number of replicas has changed: expected 3, got 4
```
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-serial/1698/
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Expected error:
<*errors.StatusError | 0xc8211d1500>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server does not allow access to the requested resource (get replicationControllers rc)",
Reason: "Forbidden",
Details: {
Name: "rc",
Group: "",
Kind: "replicationControllers",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-ngocw/replicationcontrollers/rc\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 403,
},
}
the server does not allow access to the requested resource (get replicationControllers rc)
not to have occurred
```
@jszczepkowski @fgrzadkowski @davidopp This test in its various flavors has been rather flaky for the past few months based on how often I see it show up in red/yellow tests. e.g. kubernetes-e2e-gke-test/13349 for another hit. Is there a plan to address in the near term?
@alex-mohr @a-robinson
I was investigating this issue last week. We have two problems here:
- Problems with resource access: duplicate of #28656,
- Lack of HPA stability on GKE: there are no v(4) logs on GKE, I couldn't debug the issue w/o it. There is a google internal bug for increasing the log level: b/29991151.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-serial/1868/
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Expected error:
<*errors.StatusError | 0xc8210ea680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server has prevented the request from succeeding (get replicationControllers rc)",
Reason: "InternalError",
Details: {
Name: "rc",
Group: "",
Kind: "replicationControllers",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-wiun2/replicationcontrollers/rc\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server has prevented the request from succeeding (get replicationControllers rc)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:249
```
The last [problem](https://github.com/kubernetes/kubernetes/issues/28097#issuecomment-236316408) (an error on the server has prevented the request from succeeding) is a duplicate of #28656.
cAdvisor /stats/summary endpoint in kubelet returns incorrect cpu usage numbers
### Environment
Kubernetes version: 1.2.3
Docker version: 1.10.3
3-node (c4.xlarge) cluster on AWS running CoreOS 1010.4.0.
### Issue
After facing an issue with incorrect metrics being reported by heapster kubernetes/heapster#1177 I tried querying the cadvisor /stats/summary endpoint directly to see if that would give me consistent values for node cpu usage.
I have one pod with cpu request=1000m and limit=1000m. In that pod I run a busy loop to consume 100% of the cpu. This is what top shows on the node.
<img width="648" alt="screen shot 2016-06-10 at 10 07 52 am" src="https://cloud.githubusercontent.com/assets/6660644/15975336/e922fe28-2f01-11e6-9dd9-7482fa52b4e4.png">
I query the /stats/summary endpoint every 5 seconds; however, it seems that the latest timestamps are only updated every 15 seconds or so. Checking the `summary.Node.CPU.UsageNanoCores` value from the returned summary gives me the following output (formatted):
```
TS:2016-06-09T20:05:20-07:00, Percentage:105.539566 Val:1055395658
TS:2016-06-09T20:05:31-07:00, Percentage:107.416097 Val:1074160974
TS:2016-06-09T20:05:44-07:00, Percentage:108.195877 Val:1081958770
TS:2016-06-09T20:05:59-07:00, Percentage:106.670910 Val:1066709101
TS:2016-06-09T20:06:19-07:00, Percentage:14.360576 Val:143605762
TS:2016-06-09T20:06:31-07:00, Percentage:108.000277 Val:1080002769
TS:2016-06-09T20:06:41-07:00, Percentage:108.373232 Val:1083732315
TS:2016-06-09T20:06:56-07:00, Percentage:107.025070 Val:1070250700
TS:2016-06-09T20:07:16-07:00, Percentage:13.004869 Val:130048687
TS:2016-06-09T20:07:31-07:00, Percentage:106.839146 Val:1068391461
TS:2016-06-09T20:07:48-07:00, Percentage:107.614464 Val:1076144640
TS:2016-06-09T20:08:06-07:00, Percentage:4.232330 Val:42323305
TS:2016-06-09T20:08:20-07:00, Percentage:106.009173 Val:1060091732
TS:2016-06-09T20:08:35-07:00, Percentage:108.121440 Val:1081214401
TS:2016-06-09T20:08:50-07:00, Percentage:106.659561 Val:1066595609
TS:2016-06-09T20:09:07-07:00, Percentage:1.724644 Val:17246439
TS:2016-06-09T20:09:19-07:00, Percentage:106.633227 Val:1066332268
TS:2016-06-09T20:09:38-07:00, Percentage:9.938621 Val:99386209
TS:2016-06-09T20:09:53-07:00, Percentage:107.046112 Val:1070461118
TS:2016-06-09T20:10:10-07:00, Percentage:3.373636 Val:33736361
TS:2016-06-09T20:10:25-07:00, Percentage:107.338541 Val:1073385413
TS:2016-06-09T20:10:39-07:00, Percentage:108.575783 Val:1085757834
TS:2016-06-09T20:10:54-07:00, Percentage:107.055382 Val:1070553817
TS:2016-06-09T20:11:13-07:00, Percentage:7.869509 Val:78695088
TS:2016-06-09T20:11:32-07:00, Percentage:11.476262 Val:114762620
TS:2016-06-09T20:11:45-07:00, Percentage:106.928681 Val:1069286811
TS:2016-06-09T20:12:03-07:00, Percentage:3.309632 Val:33096320
TS:2016-06-09T20:12:15-07:00, Percentage:105.832345 Val:1058323450
TS:2016-06-09T20:12:34-07:00, Percentage:5.079409 Val:50794090
TS:2016-06-09T20:12:47-07:00, Percentage:106.305439 Val:1063054389
TS:2016-06-09T20:13:05-07:00, Percentage:3.613690 Val:36136900
TS:2016-06-09T20:13:24-07:00, Percentage:9.785441 Val:97854409
TS:2016-06-09T20:13:43-07:00, Percentage:12.661783 Val:126617830
```
As you can see, I'm not getting a steady report of near-100% CPU usage values for UsageNanoCores. Any idea why this might be the case, or how I can debug this issue? Also, is there any way I can change the resolution of the summary stats to get more fine-grained reporting?
/cc @xiang90
/cc @vishh
@xiang90 @vishh is on vacation. I am guessing @timstclair is taking over while he is out?
cc @fgrzadkowski @mwielgus
cc @jszczepkowski
Ok, we received several similar reports over different channels. It looks like there is a regression in the node monitoring pipeline. But @vishh and @timstclair are out this week; the rest of the node team will take a look.
A little clarification here:
1) This is not a regression from the 1.2 release. The issue was reported against the 1.2 release.
2) The issue is filed against CoreOS nodes. There is a known integration issue between cAdvisor and CoreOS, but we need to verify whether it is CoreOS-specific.
3) There are many compatibility fixes in cAdvisor after 1.2. We need to verify whether this particular issue is handled.
@Random-Liu could you please see if we can reproduce the issue on GCE first. Then we can look deeply.
I can reproduce this in my GCE cluster.
The pod spec:
```
apiVersion: v1
kind: Pod
metadata:
  name: busyloop
spec:
  containers:
  - name: busyloop
    image: busybox:1.24
    resources:
      limits:
        cpu: "1000m"
      requests:
        cpu: "1000m"
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do let a=a+1; done"
```
The cpu usage shown on the node:

The CPU usage from summary api:
```
"2016-06-15T18:18:50Z"
42617410 <------
"2016-06-15T18:19:08Z"
22084312 <------
"2016-06-15T18:19:20Z"
1041369086
"2016-06-15T18:19:20Z"
1041369086
"2016-06-15T18:19:37Z"
1038933723
"2016-06-15T18:19:54Z"
1036406938
"2016-06-15T18:20:10Z"
1060789362
"2016-06-15T18:20:25Z"
1036855643
"2016-06-15T18:20:36Z"
1035081293
"2016-06-15T18:20:55Z"
37240397 <------
"2016-06-15T18:21:14Z"
63126343 <------
```
I turned up a cluster and checked the cAdvisor reports; there is no issue on the cAdvisor side. This is good news. If there is an issue, it should be on the Kubelet side when generating the summary report.
1. This happens to both the node total cpu usage and the container cpu usage.
2. This already happened when the summary api was initially added in https://github.com/kubernetes/kubernetes/commit/ba5be34574984b288dfaeaa54a555eca4c6ca710
```
{
"usageNanoCores": 92771898,
"usageCoreNanoSeconds": 706834898989
}
{
"usageNanoCores": 107032388,
"usageCoreNanoSeconds": 727412973043
}
{
"usageNanoCores": 1027715766,
"usageCoreNanoSeconds": 738008086317
}
{
"usageNanoCores": 1031570384,
"usageCoreNanoSeconds": 748728467038
}
{
"usageNanoCores": 1032540287,
"usageCoreNanoSeconds": 766018195725
}
{
"usageNanoCores": 1030987975,
"usageCoreNanoSeconds": 783682285808
}
{
"usageNanoCores": 54850471,
"usageCoreNanoSeconds": 803166613887
}
{
"usageNanoCores": 1033434503,
"usageCoreNanoSeconds": 821532921078
}
{
"usageNanoCores": 51873488,
"usageCoreNanoSeconds": 840957415420
}
{
"usageNanoCores": 1032030277,
"usageCoreNanoSeconds": 853306743742
}
{
"usageNanoCores": 1032416963,
"usageCoreNanoSeconds": 868911126177
}
{
"usageNanoCores": 1033925228,
"usageCoreNanoSeconds": 884786175548
}
{
"usageNanoCores": 1052274284,
"usageCoreNanoSeconds": 900235028759
}
{
"usageNanoCores": 1028935005,
"usageCoreNanoSeconds": 918629560058
}
{
"usageNanoCores": 1037630827,
"usageCoreNanoSeconds": 930626761245
}
{
"usageNanoCores": 1033762611,
"usageCoreNanoSeconds": 946582359518
}
{
"usageNanoCores": 1032846972,
"usageCoreNanoSeconds": 960848048815
}
{
"usageNanoCores": 103042617,
"usageCoreNanoSeconds": 981330261506
}
{
"usageNanoCores": 102881754,
"usageCoreNanoSeconds": 1001823960435
}
{
"usageNanoCores": 61051671,
"usageCoreNanoSeconds": 1021430463957
}
```
This is definitely a cadvisor issue. I got the following data with https://github.com/kubernetes/kubernetes/commit/ba5be34574984b288dfaeaa54a555eca4c6ca710:
```
summary 1054030207
cadvisor 1054030207
summary 1054030207
cadvisor 1054030207
summary 1050024459
cadvisor 1050024459
summary 1050024459
cadvisor 1050024459
summary 135275808 <------
cadvisor 135275808 <------
summary 1043402283
cadvisor 1043402283
summary 1043402283
cadvisor 1043402283
summary 16972453 <------
cadvisor 16972453 <------
summary 1031975045
cadvisor 1031975045
summary 1031879659
cadvisor 1031879659
summary 1031879659
cadvisor 1031879659
summary 95699418 <------
cadvisor 95699418 <------
summary 95699418 <------
cadvisor 95699418 <------
summary 1034200592
cadvisor 1034200592
summary 1030551299
cadvisor 1030551299
summary 1030551299
cadvisor 1030551299
summary 1030428344
cadvisor 1030428344
summary 1031580099
cadvisor 1031580099
summary 1031580099
cadvisor 1031580099
summary 1029971315
cadvisor 1029971315
summary 1035317622
cadvisor 1035317622
```
The script I use:
```
#!/bin/bash
while true
do
echo "summary" `curl -s http://localhost:10255/stats/summary | ./jq '.node.cpu.usageNanoCores'`
echo "cadvisor" `curl -s http://localhost:4194/api/v2.1/stats | ./jq '(."/" | reverse)[0].cpu_inst.usage.total'`
sleep 10
done
```
I think I know the root cause based on the data collected by @Random-Liu above, but need to verify.
UsageNanoCores was introduced to record the total CPU usage (sum across all cores) averaged over the sample window, but I couldn't find the code summing the usages across cores. In @Random-Liu's test, I think the node has 2 cores and the busyloop container is running on both. I believe cadvisor does report the usage on both cores, but the summary only reports the first one here.
In summary, it is a kubelet summary code bug, not a cAdvisor issue.
@Random-Liu Since I don't have the test environment ready for this yet, could you please help me quickly validate my theory.
Please update your busyloop container's cpuset.cpus to 0. You can simply modify /sys/fs/cgroup/cpuset/<container-id>/cpuset.cpus from 0-1 to 0. Then run your stats collection script.
@dchen1107 - I sort of doubt that's the issue, since the summary just copies the field from the cAdvisor API: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/stats/summary.go#L289
Without looking too deep (vacation and all), how do the cumulative numbers look (`UsageCoreNanoSeconds`)? If the issue is only with the "instantaneous" stats, I suspect there's a problem in the conversion logic found [here](https://github.com/google/cadvisor/blob/master/info/v2/conversion.go#L187).
@timstclair you should be on vacation :-)
Yes, I just saw the code; we simply use Total from the cAdvisor API. Also, @Random-Liu just mentioned to me that the initial report was about node usage, not pod usage.
@timstclair @dchen1107 If we manually calculate with the cumulative number `UsageCoreNanoSeconds`, the result is right!
@Random-Liu found the root cause here https://github.com/google/cadvisor/blob/master/info/v2/conversion.go#L209:
```
convertToRate := func(lastValue, curValue uint64) (uint64, error) {
	if curValue < lastValue {
		return 0, fmt.Errorf("cumulative stats decrease")
	}
	valueDelta := curValue - lastValue
	return (valueDelta * 1e9) / timeDeltaNs, nil
}
```
When valueDelta is too big, multiplying it by 1e9 makes it overflow.
Can we use https://golang.org/pkg/math/big/ in the above conversion code?
In one of the overflow cases in my test, `valueDelta` is `18949414524` and `timeDeltaNs` is `17978530662`; `18949414524 * 10^9 > MAX_UINT64 = 18,446,744,073,709,551,615`.
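Plugging those exact numbers in shows the wrap-around directly. The divide-first variant below is only a sketch of the proposed float64 fix, not the actual patch:

```go
package main

import "fmt"

func main() {
	// The exact values from the overflow case above.
	valueDelta := uint64(18949414524)  // nanoseconds of CPU used in the window
	timeDeltaNs := uint64(17978530662) // length of the window in nanoseconds

	// Order of operations in the buggy conversion: the multiplication
	// wraps around MAX_UINT64, so the computed rate comes out tiny.
	buggy := (valueDelta * 1e9) / timeDeltaNs

	// Divide-first sketch of the proposed fix: float64's ~15
	// significant digits are plenty for nanocore values.
	fixed := uint64(float64(valueDelta) / float64(timeDeltaNs) * 1e9)

	fmt.Println("buggy:", buggy) // a tiny bogus rate, ~0.03 cores
	fmt.Println("fixed:", fixed) // ~1.05e9 nanocores, i.e. ~1.05 cores
}
```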
Some simple ways to solve this:
- Change the value to float64, but we may lose some precision.
- Use https://golang.org/pkg/math/big/, this may make the calculation a little slower. (see https://groups.google.com/forum/#!topic/golang-nuts/BPu-ZNatVhM)
In fact, we usually only need 10-11 digits for cpu usage in nanocores (xx.xxx xxx xxx); float64 has about 15 digits (52 bits) of precision, which should be enough already.
I think we should just change this to `float64`, divide first, then multiply by `1e9` and convert back to `uint64`.
@dchen1107 @timstclair WDYT?
/cc @piosz
Float64 should be more than enough precision for this.
On Jun 15, 2016 20:29, "Lantao Liu" notifications@github.com wrote:
> Some simple ways to solve this:
> - Change the value to float64, but we may lose some precision.
> - Use https://golang.org/pkg/math/big/, this may make the calculation
> a little slower. Which one is acceptable? @piosz
@timstclair Cool, I'll send a PR to fix this.
Great,
When will the fix be available? Will it be backported into the k8s 1.2 release?
Thanks!
@stefanodoni Will send it out soon :)
The fix is here https://github.com/google/cadvisor/pull/1333.
Can anyone with permission review and merge it? @dchen1107 /cc @kubernetes/sig-node
@stefanodoni The fix was already merged into cadvisor, and the open PR #27591 will update the cadvisor library in Kubernetes.
cc/ @piosz do you think we need to backport this to the k8s 1.2 release? That would mean creating a cadvisor cherrypick release for 1.2.
I don't think so, since we are about to release 1.3, but I'll let you decide. I can imagine that bumping cadvisor right now in release 1.2 might be risky since there were many changes there, but @timstclair can analyze the risk better.
Hi,
Any news on that one?
I think this is going to impact pretty much all of k8s deployments out there, if this remains unfixed.
Are there alternative ways to get reliable pod CPU usage metrics within k8s?
@stefanodoni This has been fixed in #27591
Great thanks!
To better state my question: will the change be backported to the 1.2 series? I guess most people are using k8s 1.2 currently, so the question probably translates to: which Heapster release should be used to monitor 1.2 clusters?
The cluster monitoring manifests that come with the k8s release currently use the heapster 1.0.2 version, which is buggy:
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-v1.0.2
  namespace: kube-system
  labels:
    k8s-app: heapster
    kubernetes.io/cluster-service: "true"
    version: v1.0.2
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
      version: v1.0.2
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v1.0.2
    spec:
      containers:
      - image: gcr.io/google_containers/heapster:v1.0.2
        name: heapster
```
@stefanodoni The bug is on the cadvisor side, not in heapster. The change will be in 1.3, and 1.3 will come out soon.
For 1.2, AFAIK we don't have a plan to backport it for now. @dchen1107 has better ideas about this.
Document id: 7
Predicted class = unrelated
True class: both
Text with highlighted words
When adding a library to the workspace, also update .gitignore
When running `ng g library name_of_library` also add the libraries `node_modules` path to the top level `.gitignore`.
Can you please elaborate a bit more?
You should only have node_modules at the root level of your Angular workspace.
Sure. I recently repackaged `@fireflysemantics/is` using the Angular Package Format. Here is the .gitignore for the project:
https://github.com/fireflysemantics/is/blob/master/.gitignore
Notice:
```
# dependencies
/node_modules
/projects/is/node_modules
```
The reason I need `projects/is/node_modules` is that unless I have this, git will see all those files when we run `npm i` for the local library.
Angular CLI and monorepos (like lerna, yarn workspaces)
### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [x] feature request
```
### Versions.
```bash
$ ng --version
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
@angular/cli: 1.3.0-beta.0
node: 6.10.1
os: darwin x64
@angular/animations: error
@angular/common: error
@angular/compiler: error
@angular/core: 4.3.1
@angular/forms: error
@angular/http: error
@angular/platform-browser: error
@angular/platform-browser-dynamic: error
@angular/router: error
@angular/cli: error
@angular/compiler-cli: error
@angular/language-service: error
```
This in itself manifests the error.
### Repro steps.
A full repro can be found at GitHub repo [spektrakel-blog/a-glimpse-at-yarn-workspaces](https://github.com/spektrakel-blog/a-glimpse-at-yarn-workspaces)
Initially, I faced the "You seem to not be depending on @angular/core" error.
This goes down to a [sanity check whether Angular is installed as a local dependency](https://github.com/angular/angular-cli/blob/master/packages/@angular/cli/upgrade/version.ts#L85-L89).
However, there are more errors with [yarn workspaces](https://github.com/yarnpkg/yarn/issues/3294), when dependencies from a workspace (or "sub-project", or "sub-package") are installed to the top-most `node_modules` directory of the workspaces root.
Then it ends up with this file structure:
Workspace root `package.json`:
```json
"workspaces": [
"packages/*",
"demo"
]
```
Demo workspace `package.json`:
```json
{
"name": "demo",
"version": "0.0.0",
"license": "MIT",
"private": true,
"dependencies": {
"@angular/common": "^4.2.4",
"@angular/core": "^4.2.4",
"@angular/forms": "^4.2.4",
"@angular/http": "^4.2.4"
}
}
```
Now, dependencies are installed to `node_modules` and not to `demo/node_modules`:
```bash
$ ls -a demo/node_modules/
. .. .bin .yarn-integrity
$ ls -a node_modules/@angular
. common forms platform-browser-dynamic
.. compiler http router
animations compiler-cli language-service tsc-wrapped
cli core platform-browser
```
Then, running `ng build` from the `demo` folder errors:
```bash
$ cd demo
$ yarn build
yarn build v0.27.5
$ ng build
You seem to not be depending on "@angular/core". This is an error.
error Command failed with exit code 2.
```
As a workaround, it's possible to symlink "@angular/core":
```bash
$ mkdir -p ./node_modules/@angular
$ ln -sf ../../../node_modules/@angular/core ./node_modules/@angular/core
$ ls -l node_modules/@angular/core
lrwxr-xr-x ... node_modules/@angular/core -> ../../../node_modules/@angular/core
$ cat node_modules/@angular/core/package.json
{
"name": "@angular/core",
"version": "4.3.1",
"description": "Angular - the core framework",
"main": "./bundles/core.umd.js",
...
}
```
But then we only get one error further until:
```bash
$ yarn build
yarn build v0.27.5
$ ng build
Hash: a90d0476b33232a953c1
Time: 53409ms
chunk {0} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 177 kB {3} [initial] [rendered]
chunk {1} styles.bundle.js, styles.bundle.js.map (styles) 10.5 kB {3} [initial] [rendered]
chunk {2} main.bundle.js, main.bundle.js.map (main) 1.89 MB [initial] [rendered]
chunk {3} inline.bundle.js, inline.bundle.js.map (inline) 0 bytes [entry] [rendered]
WARNING in ../~/@angular/compiler/@angular/compiler.es5.js
(Emitted value instead of an instance of Error) Cannot find source file 'compiler.es5.ts': Error: Can't resolve './compiler.es5.ts' in '/Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/compiler/@angular'
@ ../~/@angular/platform-browser-dynamic/@angular/platform-browser-dynamic.es5.js 7:0-72
@ ./src/main.ts
@ multi ./src/main.ts
ERROR in Error encountered resolving symbol values statically. Function calls are not supported. Consider replacing the function or lambda with a reference to an exported function (position 194:50 in the original .ts file), resolving symbol NgModule in /Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/core/core.d.ts, resolving symbol BrowserModule in /Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/platform-browser/platform-browser.d.ts, resolving symbol BrowserModule in /Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/platform-browser/platform-browser.d.ts
```
Which suggests that Angular CLI isn't really meant to work with "sub-projects" like yarn workspaces.
### The log given by the failure.
See above.
### Desired functionality.
Angular CLI and webpack should resolve dependencies from the top-level `node_modules`.
The sanity check should honor node's module resolution algorithm (looking up recursively up the file tree).
### Mention any other details that might be useful.
Related to #6504
A related change was implemented in #6475 (resolve in all available node_modules), but it seems that it does not address this use case.
At this point, this is me primarily asking:
"How is Angular CLI meant to work with monorepos?"
If the answer is "please use Angular CLI from the monorepo root directory", I can live by that.
I'm not really familiar with Yarn workspaces, but the CLI currently attributes meaning to the location of both `.angular-cli.json` and `package.json`.
It expects them to be in the same dir, assumes that dir is the project root, and that there's also a `node_modules` there. These requirements are pretty pervasive throughout the CLI.
So no, I don't think the CLI is currently well equipped to work in monorepos.
But that's not because we don't want it to, just because it wasn't ever much of a concern. If there's a reasonable way of making it work I'm all for it really.
I think some of the CLI users are using it with Lerna, but I don't know the details of it.
The important bit of monorepo support is that it's not specific to any single setup (like Lerna or these Yarn workspaces), and that it doesn't compromise the current setup.
Regarding my original post: I need to check whether [`preserveSymlinks`](https://github.com/angular/angular-cli/wiki/angular-cli) offers an easy solution / workaround. #7194 #7081
---
Regarding discussion:
What may happen in monorepos (at least in yarn workspaces) that they install "(workspace-)local" and "(project-)global" dependencies. Example:
```
|- aio
|  |- node_modules
|  |  |- @angular
|  |     |- http          # <- 4.3.2
|  |- package.json
|- node_modules
|  |- @angular
|     |- common           # <- 4.3.2
|- package.json
|- packages
   |- my-lib
      |- node_modules
      |  |- @angular
      |     |- common     # <- 4.2.0
      |- package.json
```
Let's set aside potential version conflicts (which are in the hands of the user).
The difficulty for the webpack build would be to resolve modules "by walking up the tree until it finds one". I remember from past custom webpack configs that you had to pass the location of the `node_modules`. Was it in `resolve`?
https://github.com/angular/angular-cli/blob/master/packages/%40angular/cli/models/webpack-configs/common.ts#L91
Is it even possible to resolve modules from different directories?
I think it's possible, yeah. But now that got me thinking about how that happens with peer deps. In your example there are different versions of packages that all want the same version of the peer deps. So through node module resolution you'd end up getting the different versions.
But even though the module resolution might work, I think you'd get... a static analysis error. Which is what you actually got initially:
```
$ yarn build
yarn build v0.27.5
$ ng build
Hash: a90d0476b33232a953c1
Time: 53409ms
chunk {0} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 177 kB {3} [initial] [rendered]
chunk {1} styles.bundle.js, styles.bundle.js.map (styles) 10.5 kB {3} [initial] [rendered]
chunk {2} main.bundle.js, main.bundle.js.map (main) 1.89 MB [initial] [rendered]
chunk {3} inline.bundle.js, inline.bundle.js.map (inline) 0 bytes [entry] [rendered]
WARNING in ../~/@angular/compiler/@angular/compiler.es5.js
(Emitted value instead of an instance of Error) Cannot find source file 'compiler.es5.ts': Error: Can't resolve './compiler.es5.ts' in '/Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/compiler/@angular'
@ ../~/@angular/platform-browser-dynamic/@angular/platform-browser-dynamic.es5.js 7:0-72
@ ./src/main.ts
@ multi ./src/main.ts
ERROR in Error encountered resolving symbol values statically. Function calls are not supported. Consider replacing the function or lambda with a reference to an exported function (position 194:50 in the original .ts file), resolving symbol NgModule in /Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/core/core.d.ts, resolving symbol BrowserModule in /Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/platform-browser/platform-browser.d.ts, resolving symbol BrowserModule in /Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/platform-browser/platform-browser.d.ts
```
This wasn't just an error finding a module, it was an error finding a module while using static analysis to find lazy loaded modules (we do that on every build, not just AOT).
So I ask... what happens in your setup when you install all the `@angular/*` packages at a given level, satisfying the peer deps? It might just work.
And if that works the only thing we need to do to support it is to relax the `You seem to not be depending on "@angular/core". This is an error.` one, by performing some better `@angular/core` checks (https://github.com/angular/angular-cli/blob/master/packages/%40angular/cli/upgrade/version.ts#L85-L115).
See where it tries to use path join with the project root? We should be able to use node module resolution instead like https://github.com/angular/angular-cli/blob/master/packages/%40angular/cli/utilities/require-project-module.ts.
@filipesilva Ack.
```
|- demo
| .angular-cli.json
|- node_modules
|- @angular # symlink to ../../node_modules/@angular
|- node_modules
|- @angular
```
```bash
$ ng build
WARNING in ../~/@angular/compiler/@angular/compiler.es5.js
(Emitted value instead of an instance of Error) Cannot find source file 'compiler.es5.ts': Error: Can't resolve './compiler.es5.ts' in '/Users/David/Projects/github/spektrakel-blog/a-glimpse-at-yarn-workspaces/node_modules/@angular/compiler/@angular'
@ ../~/@angular/platform-browser-dynamic/@angular/platform-browser-dynamic.es5.js 7:0-72
@ ./src/main.ts
@ multi ./src/main.ts
```
So still the same error.
```bash
$ ng build --preserve-symlinks
Your global Angular CLI version (1.3.0-rc.3) is greater than your local
version (1.3.0-beta.0). The local Angular CLI version is used.
To disable this warning use "ng set --global warnings.versionMismatch=false".
Hash: d97b10ba4761c0902221
Time: 19229ms
chunk {0} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 177 kB {4} [initial] [rendered]
chunk {1} main.bundle.js, main.bundle.js.map (main) 94.3 kB {3} [initial] [rendered]
chunk {2} styles.bundle.js, styles.bundle.js.map (styles) 10.5 kB {4} [initial] [rendered]
chunk {3} vendor.bundle.js, vendor.bundle.js.map (vendor) 1.8 MB [initial] [rendered]
chunk {4} inline.bundle.js, inline.bundle.js.map (inline) 0 bytes [entry] [rendered]
```
So with `--preserve-symlinks` the build is fine!
Also AoT / Prod build!
```bash
$ ng build --preserve-symlinks --aot --prod
Your global Angular CLI version (1.3.0-rc.3) is greater than your local
version (1.3.0-beta.0). The local Angular CLI version is used.
To disable this warning use "ng set --global warnings.versionMismatch=false".
Hash: 31d53744fb697d9b9f87
Time: 19407ms
chunk {0} polyfills.3b4be225e7f6a233ebb3.bundle.js (polyfills) 177 kB {4} [initial] [rendered]
chunk {1} main.a772681d78c70ca6637d.bundle.js (main) 101 kB {3} [initial] [rendered]
chunk {2} styles.d41d8cd98f00b204e980.bundle.css (styles) 69 bytes {4} [initial] [rendered]
chunk {3} vendor.e839d82f8b5326dc3f41.bundle.js (vendor) 761 kB [initial] [rendered]
chunk {4} inline.6eaed4d70fc63a0f7481.bundle.js (inline) 0 bytes [entry] [rendered]
```
Ok, so this is progress. I take it the whole partial dep thing still won't work too well (for peer deps at least) but at least stuff seems to work if we tell webpack to pretend symlinks aren't really there.
BTW `--preserve-symlinks` was the work of @clydin, so big thanks to him for enabling this.
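For a plain webpack setup, a rough equivalent — assuming the flag maps onto webpack's resolver option, which is a guess about the CLI's internals rather than its confirmed implementation — would look like:

```javascript
// Sketch of a webpack config fragment. resolve.symlinks defaults to true,
// which rewrites symlinked paths to their real location on disk; setting it
// to false keeps the symlinked path as-is, analogous to --preserve-symlinks.
const config = {
  resolve: {
    symlinks: false, // do not resolve symlinks to their real path
  },
  resolveLoader: {
    symlinks: false, // same behavior for loader resolution
  },
};

module.exports = config;
```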
This was merged recently in TS and is probably available in nightly so I think the symlinks won't be required anymore: https://github.com/Microsoft/TypeScript/pull/16274
**Edit**: this doesn't solve what I thought it would :(
Similar to feature request #6083
@filipesilva
> I'm not really familiar with the Yarn workspaces
If you want to run some tests without having to worry about all that: a really simple backend/frontend monorepo like the following does not work:
```
| package.json
| node_modules/
| backend/
| frontend/
|   | .angular-cli.json
```
(then running from root folder `ng serve/build/test`)
just fyi, for packages that do not support the module hoisting done in yarn workspaces there is now a `nohoist` flag to prevent it https://yarnpkg.com/blog/2018/02/15/nohoist/
Not quite sure if this is related, but I get the following error:
```
Could not find API compiler-cli, function VERSION
```
I use lerna with yarn workspaces. I currently have the `angular.json` inside the subfolder of my web package:
`~/packages/web/angular.json`
Anyone found a workaround yet?
Working root `package.json`:
```
{
  "private": true,
  "scripts": {
    ...
  },
  "workspaces": {
    "packages": [
      "libs/*",
      "apps/*"
    ],
    "nohoist": [
      "**/@angular*",
      "**/@angular*/**"
    ]
  }
}
```
I'm subscribing here, trying to use Yarn workspaces. `nohoist` indeed works as a temporary fix - at the expense of disk cost.
Hello,
I'm working with Angular 7.2.x and I think that problem was fixed in this version, as I see here: #11685
But I still can't get it to work, and I would like to avoid using nohoist.
I have built a very simple repo showing the problem here: [demo](https://github.com/Skyndia/yarn-workspaces-test.git)
I am new to Angular and even more to yarn, so I may have made a mistake somewhere, but the example is very simple.
I created one yarn workspace containing a new Angular application, which can't find the @angular/ packages because the node_modules directory is in the parent (root) directory.
If someone can tell me what isn't right, or whether I still need to use nohoist, that would be very helpful :)
Thank you!
I had a similar issue.
I would love to create a symlink with @angular in the package but I did not figure out how to do that.
So I did a global search for node_modules/@angular.
It turns out that I had a few references in the angular.json file.
I set my reference to the package in the root.
Ie.
```
"styles": [
  {
    "input": "./../../node_modules/@angular/material/prebuilt-themes/indigo-pink.css"
  },
  "src/styles.scss"
]
```
I did this for the $schema, styles (under test and architect.)
I have compiled my project and this seems to work.
I'm having an issue I think is related to this one. I'm trying Ivy in a lerna workspace. With `ng serve` everything works fine. However, when I try to run `ng run build` it throws:
```shell
Date: 2019-06-13T21:52:59.215Z
Hash: 55c7e38a65bd19305a57
Time: 9864ms
chunk {0} runtime-es5.9c308a63d02029c20228.js (runtime) 1.41 kB [entry] [rendered]
chunk {1} main-es5.4af9b61479361f268d39.js (main) 128 bytes [initial] [rendered]
chunk {2} polyfills-es5.76fb5c306a2dd7f67a99.js (polyfills) 68.1 kB [initial] [rendered]
chunk {3} styles.691cb89d8238aaa5586f.css (styles) 63.4 kB [initial] [rendered]
ERROR in ../../node_modules/@angular/material/toolbar/typings/toolbar-module.d.ts(8,22): error TS-996002: Appears in the NgModule.imports of AppMaterialModule, but could not be
resolved to an NgModule class
../../node_modules/@angular/material/toolbar/typings/toolbar-module.d.ts(8,22): error TS-996003: Appears in the NgModule.exports of AppMaterialModule, but could not be resolved
to an NgModule, Component, Directive, or Pipe class
../../node_modules/@angular/material/button/typings/button-module.d.ts(8,22): error TS-996002: Appears in the NgModule.imports of SharedMaterialModule, but could not be resolved to an NgModule class
../../node_modules/@angular/material/card/typings/card-module.d.ts(8,22): error TS-996002: Appears in the NgModule.imports of SharedMaterialModule, but could not be resolved to
an NgModule class
../../node_modules/@angular/material/form-field/typings/form-field-module.d.ts(8,22): error TS-996002: Appears in the NgModule.imports of SharedMaterialModule, but could not be
resolved to an NgModule class
../../node_modules/@angular/material/input/typings/input-module.d.ts(8,22): error TS-996002: Appears in the NgModule.imports of SharedMaterialModule, but could not be resolved to an NgModule class
../../node_modules/@angular/material/select/typings/select-module.d.ts(8,22): error TS-996002: Appears in the NgModule.imports of SharedMaterialModule, but could not be resolved to an NgModule class
../../node_modules/@angular/material/tabs/typings/tabs-module.d.ts(8,22): error TS-996002: Appears in the NgModule.imports of SharedMaterialModule, but could not be resolved to
an NgModule class
../../node_modules/@angular/material/button/typings/button-module.d.ts(8,22): error TS-996003: Appears in the NgModule.exports of SharedMaterialModule, but could not be resolved to an NgModule, Component, Directive, or Pipe class
../../node_modules/@angular/material/card/typings/card-module.d.ts(8,22): error TS-996003: Appears in the NgModule.exports of SharedMaterialModule, but could not be resolved to
an NgModule, Component, Directive, or Pipe class
../../node_modules/@angular/material/form-field/typings/form-field-module.d.ts(8,22): error TS-996003: Appears in the NgModule.exports of SharedMaterialModule, but could not be
resolved to an NgModule, Component, Directive, or Pipe class
../../node_modules/@angular/material/input/typings/input-module.d.ts(8,22): error TS-996003: Appears in the NgModule.exports of SharedMaterialModule, but could not be resolved to an NgModule, Component, Directive, or Pipe class
../../node_modules/@angular/material/select/typings/select-module.d.ts(8,22): error TS-996003: Appears in the NgModule.exports of SharedMaterialModule, but could not be resolved to an NgModule, Component, Directive, or Pipe class
../../node_modules/@angular/material/tabs/typings/tabs-module.d.ts(8,22): error TS-996003: Appears in the NgModule.exports of SharedMaterialModule, but could not be resolved to
an NgModule, Component, Directive, or Pipe class
src/app/shared/shared-material.module.ts(24,14): error TS-996002: Appears in the NgModule.imports of SharedModule, but itself has errors
src/app/shared/shared-material.module.ts(24,14): error TS-996003: Appears in the NgModule.exports of SharedModule, but itself has errors
src/app/shared/shared.module.ts(13,14): error TS-996002: Appears in the NgModule.imports of AuthModule, but itself has errors
src/app/app-material.module.ts(10,14): error TS-996002: Appears in the NgModule.imports of AppModule, but itself has errors
```
I ran `ivy-ngcc` with the source set to the lerna root node_modules, and it ran fine. However, it seems like `@angular/material` is not being upgraded.
The `nohoist` solution does not work for me, because when I use `nohoist`, node_modules is created at the apps level:
+ apps
  + ng-app
    - node_modules
+ packages
So in packages I get the error `Cannot find module '@angular/..`, because @angular doesn't exist in the root node_modules.
Any ideas how to solve this problem? Please.
@filipesilva angular is v9, and still doesn't support lerna/yarn? :(
Document id: 22
Predicted class = right
True class: both
Text with highlighted words
AoT ERROR in Cannot read property 'codeGen' of undefined
### OS?
OSX El Capitan, OSX Sierra, Amazon Linux on EC2
### Versions.
beta.22 / angular 2.2.3
### Repro steps.
I have a branch cut from a few weeks ago on the above versions.
Several days ago, `ng build --aot=true --target=production` works fine.
Yesterday, same command fails (log below)
Keep in mind that this is via a build server on Jenkins, which checks out a clean copy every time. To replicate, I did a clean checkout of my repo in another directory, switched to the cut branch that hasn't been modified since our last sprint, and the same issue appears.
I was able to build fine on my local workspace repo, but once I deleted my cached node_module directory and re-installed all dependencies, I get the same error below.
### The log given by the failure.
```
Hash: 298b4d5b0927ce483a6d
Time: 18425ms
chunk {0} scripts.2ec7b6c32f2cecb6ef43.bundle.js, scripts.2ec7b6c32f2cecb6ef43.bundle.map (scripts) 112 kB {4} [initial] [rendered]
chunk {1} main.0ffb0f0b246299e375b6.bundle.js, main.0ffb0f0b246299e375b6.bundle.map (main) 2.02 kB {3} [initial] [rendered]
chunk {2} styles.b2328beb0372c051d06d.bundle.js, styles.fc27ae193c6cf276eb76015a2f77056b.bundle.css, styles.b2328beb0372c051d06d.bundle.map, styles.b2328beb0372c051d06d.bundle.map (styles) 69 bytes {4} [initial] [rendered]
chunk {3} vendor.0c6af147dfcf8a75bb37.bundle.js, vendor.0c6af147dfcf8a75bb37.bundle.map (vendor) 855 kB [initial] [rendered]
chunk {4} inline.d41d8cd98f00b204e980.bundle.js, inline.d41d8cd98f00b204e980.bundle.map (inline) 0 bytes [entry] [rendered]
```
```
ERROR in Cannot read property 'codeGen' of undefined
```
```
ERROR in ./src/main.ts
Module not found: Error: Can't resolve './$$_gendir/app/app.module.ngfactory' in '/Users/aaron/repos/my-clean-checkout/src'
@ ./src/main.ts 4:0-74
@ multi main
```
```
ERROR in ./~/@angular/core/src/linker/system_js_ng_module_factory_loader.js
Module not found: Error: Can't resolve '/Users/aaron/repos/my-clean-checkout/src/$$_gendir' in '/Users/aaron/repos/my-clean-checkout/node_modules/@angular/core/src/linker'
@ ./~/@angular/core/src/linker/system_js_ng_module_factory_loader.js 46:15-36 58:15-102
@ ./~/@angular/core/src/linker.js
@ ./~/@angular/core/src/core.js
@ ./~/@angular/core/index.js
@ ./src/main.ts
@ multi main
```
### Mention any other details that might be useful.
I suspect some breaking bug might have been published to beta.22, angular 2.2.3, or one of its dependencies. NPM is fetching the same version from the public repo as before, but getting a bad version now.
Upgrading to beta.24 / angular 2.3.0+ may not be a feasible solution, since I use a few libraries that haven't been made AoT compatible for 2.3.0+ but still work with 2.2.3. If I were to upgrade, I get the error reported at #3674
## UPDATE
The internal dependency in question appears to be `@ngtools/webpack`. Perhaps the solution is to pin beta.22's @ngtools/webpack to version 1.1.9, instead of specifying the latest minor/patch version, which resolves to 1.2.1 according to npmjs.
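The pinning described above could look like this in `package.json` — note the exact version for `@ngtools/webpack` (no `^` or `~` range, so npm cannot silently resolve to 1.2.1). The surrounding entries are illustrative, not taken from the reporter's actual file:

```
{
  "devDependencies": {
    "@ngtools/webpack": "1.1.9"
  }
}
```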
空@ngtools/webpack - Error encountered resolving symbol values statically
### Versions.
- angular@2.3.1
- @ngtools/webpack@1.2.1
- webpack@2.1.0-beta.28
### Repro steps.
Not really repro steps, but the unchanged webpack config used to work with angular@2.2.4 and @ngtools/webpack@1.1.9. Since upgrading I get the error described below and have no idea what to do about it.
### The log given by the failure.
```
ERROR in Error encountered resolving symbol values statically. Function calls are not supported. Consider replacing the function or lambda with a reference to an exported function, resolving symbol AnimationDriver in /Users/ajantsch/Dropbox/Projects/22_9Cookies/transmission-web-client/webkick/node_modules/@angular/platform-browser/src/dom/animation_driver.d.ts, resolving symbol BrowserTestingModule in /Users/ajantsch/Dropbox/Projects/22_9Cookies/transmission-web-client/webkick/node_modules/@angular/platform-browser/testing/browser.d.ts, resolving symbol BrowserTestingModule in /Users/ajantsch/Dropbox/Projects/22_9Cookies/transmission-web-client/webkick/node_modules/@angular/platform-browser/testing/browser.d.ts
ERROR in ./src/bootstrap.ts
Module not found: Error: Can't resolve './../$$_gendir/src/app/main.module.ngfactory' in '/Users/ajantsch/Dropbox/Projects/22_9Cookies/transmission-web-client/webkick/src'
@ ./src/bootstrap.ts 3:0-83
ERROR in ./~/@angular/core/src/linker/system_js_ng_module_factory_loader.js
Module not found: Error: Can't resolve '/Users/ajantsch/Dropbox/Projects/22_9Cookies/transmission-web-client/webkick/$$_gendir' in '/Users/ajantsch/Dropbox/Projects/22_9Cookies/transmission-web-client/webkick/node_modules/@angular/core/src/linker'
@ ./~/@angular/core/src/linker/system_js_ng_module_factory_loader.js 69:15-36 85:15-102
@ ./~/@angular/core/src/linker.js
@ ./~/@angular/core/src/core.js
@ ./~/@angular/core/index.js
@ ./src/bootstrap.ts
```
Document id: 30
Predicted class = right
True class: both
Text with highlighted words
[TextField] Cursor jumps to end of input on first edit when placed in a Dialog
### Problem description
All the TextFields in my app move the cursor to the end of the input on first edit. Subsequent edits work just fine.
### Steps to reproduce
1. Create a controlled TextField with some initial contents.
2. Add a character in the middle of the initial contents
3. Observe the cursor jumping to the end of the input
If I replace the `<TextField .../>` with an `<input .../>`, the behavior stops.
### Versions
- Material-UI: v0.15
- React: v15.0.2
- Browser: Several Firefox and Chrome
### Fix?
There is a hacky fix to the problem I'm observing - Comment out this line https://github.com/callemall/material-ui/blob/master/src/TextField/TextField.js#L356
```
this.setState({hasValue: isValid(event.target.value), isClean: false});
```
Then the problem goes away, although I'm sure this isn't a correct fix.
@joewalker I just tried using the controlled example in the docs, and it's working fine.
Here's my hunch as to what's up:
My TextField is in a Dialog, and it looks like Dialog works by having a second render loop, which I think means that setState might become asynchronous? If that's true, then maybe the answer is in here?
https://stackoverflow.com/questions/28922275/in-reactjs-why-does-setstate-behave-differently-when-called-synchronously/28922465#28922465
Quick video of what's going wrong:

I press 'u' at the start of the TextField and the cursor jumps to the end.
Is there any documentation to what hasValue and isClean do?
I'm happy to work out a more correct fix
I'm afraid you may have to reverse engineer it to figure that out.
I'm experiencing the same issue with the same version. I'm observing that it always jumps to the end if I apply a custom modification to the value each time a key is pressed. In my TextField I automatically add a space between groups of numbers to make them more readable. Do you have some kind of advice?
After some more research, it turns out my issue is related to how React behaves: https://github.com/facebook/react/issues/955
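For the reformat-on-keypress case above (inserting spaces between digit groups), one workaround is to recompute the caret position from the count of significant (non-space) characters before it, and restore it with `setSelectionRange` after the re-render. The helpers below are a hypothetical sketch, not Material-UI API; only the caret math is shown, framework-free:

```javascript
// Hypothetical formatter: strip spaces, then group digits in threes.
function groupDigits(raw) {
  return raw.replace(/\s+/g, '').replace(/(\d{3})(?=\d)/g, '$1 ');
}

// Map a caret position in the raw value to the matching position in the
// formatted value: count non-space chars before the caret, then find the
// index in the formatted string after that many non-space chars.
function caretAfterFormat(rawValue, rawCaret, formatted) {
  const significant = rawValue.slice(0, rawCaret).replace(/\s+/g, '').length;
  let seen = 0;
  for (let i = 0; i < formatted.length; i++) {
    if (seen === significant) return i;
    if (formatted[i] !== ' ') seen++;
  }
  return formatted.length;
}
```

In an onChange handler one would then call `input.setSelectionRange(pos, pos)` with the computed position once the new value has been rendered.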
@joewalker: I have replicated this issue in a dialog - it is a bit annoying!
空TextField cursor moves to end if multiline
### Problem description
After you give a multiline TextField focus. The first call to onChange causes the change to be made and the cursor skips to the end of the input.
### Steps to reproduce
```es6
import React from 'react';
import TextField from 'material-ui/TextField';

export default class TextFieldExampleControlled extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      value: 'Property Value',
    };
  }

  handleChange = (event) => {
    this.setState({
      value: event.target.value,
    });
  };

  render() {
    return (
      <div>
        <TextField
          id="text-field-controlled"
          value={this.state.value}
          onChange={this.handleChange}
          fullWidth
          multiLine
          rows={10}
        />
      </div>
    );
  }
}
```
### Versions
- Material-UI: 0.15.0
- React: 15.1.0
- Browser: chrome 51
The closest thing I could find in the code is a call to [handleInputFocus](https://github.com/callemall/material-ui/blob/master/src/TextField/TextField.js#L355), which may be assuming the cursor should be at the end of the input; or maybe giving the field focus makes the browser automatically place the cursor at the end? Not sure.
Try
```
<TextField
  autoFocus={true} />
```
@jasan-s nope doesn't change the behavior. To be more specific the TextField resides within a Dialog. And the first time it's interacted with is after the area has been populated with text. The user then focuses into the TextField to change the default message and then the cursor jumps to the end of the message after the first keypress. Afterwards, and after multiple relaunches, editing the TextField works normally.
Document id: 4
Predicted class = both
True class: unrelated
Text with highlighted words
First time using Photon, first time trying to use docker-volume-vsphere as well. So far not very inspiring. Installing the plugin will crash docker and crash systemd (or something), requiring me to "hard power cycle" the VM, and afterwards docker refuses to start.
Steps to reproduce:
Download Photon 1.0 Rev 2 OVA
Deploy OVA to VM
Update VM to latest packages (tdnf distro-sync -y), Reboot
After reboot, enable SSH
Login via SSH, enable docker service, docker plugin install.. and watch ..
Here is most of the output from the above steps.
```
root@photon-machine [ ~ ]# docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
root@photon-machine [ ~ ]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
root@photon-machine [ ~ ]# systemctl start docker
root@photon-machine [ ~ ]# docker plugin ls
ID NAME DESCRIPTION ENABLED
root@photon-machine [ ~ ]# docker --version
Docker version 1.13.1, build 092cba3
root@photon-machine [ ~ ]# docker plugin install --grant-all-permissions --alias vsphere vmware/docker-volume-vsphere:latest
latest: Pulling from vmware/docker-volume-vsphere
bc8c14d82abc: Download complete
Digest: sha256:6b81577c0502f537bc8a5ccf3f1599d9f9b7276242d7043d00d04496f312b976
Status: Downloaded newer image for vmware/docker-volume-vsphere:latest
Error response from daemon: rpc error: code = 2 desc = containerd: container not started
root@photon-machine [ ~ ]# docker plugin disable vsphere
Error response from daemon: plugin "vsphere" not found
root@photon-machine [ ~ ]# docker plugin ls
ID NAME DESCRIPTION ENABLED
root@photon-machine [ ~ ]# systemctl status docker
Failed to connect to bus: No such file or directory
root@photon-machine [ ~ ]# reboot
Failed to connect to bus: No such file or directory
Failed to talk to init daemon.
```
and then after a "hard reset"
```
root@photon-machine [ ~ ]# systemctl status docker
● docker.service - Docker Daemon
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2017-05-03 01:29:29 UTC; 16s ago
     Docs: http://docs.docker.com
  Process: 298 ExecStart=/usr/bin/docker daemon $DOCKER_OPTS --containerd /run/containerd.sock (code=exited, status=1/FAILURE)
 Main PID: 298 (code=exited, status=1/FAILURE)
May 03 01:29:28 photon-machine systemd[1]: Starting Docker Daemon...
May 03 01:29:28 photon-machine docker[298]: Command "daemon" is deprecated, and will be removed in Docker 1.16. Please run `dockerd` directly.
May 03 01:29:29 photon-machine docker[298]: Error starting daemon: couldn't create plugin manager: failed to restore plugins: error reading /var/lib/docker/plugins/fd...
May 03 01:29:29 photon-machine systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 03 01:29:29 photon-machine systemd[1]: Failed to start Docker Daemon.
May 03 01:29:29 photon-machine systemd[1]: docker.service: Unit entered failed state.
May 03 01:29:29 photon-machine systemd[1]: docker.service: Failed with result 'exit-code'.
```
Variations on this that also don't work.
1) Download the ISO, and do a full install (vs the OVA)
2) enable/start docker-containerd service prior to docker plugin install
3) Run the VM on ESX 6.0 vSan cluster
4) Run the VM on ESX 6.5 vSan cluster
Should this be moved/submitted to the Photon OS issue list?
Hi @runecalico,
Sorry to hear that you run into too many issues.
> Error response from daemon: rpc error: code = 2 desc = containerd: container not started
It is likely to be a docker issue https://github.com/moby/moby/issues/22226#issuecomment-292617804
Can you please share `docker` and `docker-volume-vsphere.log` logs for further look?
Just FYI: there are photon VMs with docker 1.13.1 on our CI system and plugin installation works fine. It would be great if you could share logs that help find the root cause. Looking at the above steps, it seems like a docker issue to me.
Thanks for sharing steps!
/CC @govint @msterin
@runecalico Sorry you had hard time getting started. We are able to reproduce the issue with latest Photon release. We will root cause and follow-up with Photon OS team (CC/ @sharathjg ). Manually upgrading to latest Docker release seems to resolve the issue;
```
tdnf install wget
wget https://test.docker.com/builds/Linux/x86_64/docker-17.05.0-ce-rc3.tgz
systemctl stop docker
tdnf install tar
tar --strip-components=1 -xvzf docker-17.05.0-ce-rc3.tgz -C /usr/bin
systemctl start docker
docker info
docker plugin install --grant-all-permissions --alias vsphere vmware/docker-volume-vsphere:latest
docker volume create -d vsphere v2
docker volume ls
docker run --rm -it -v v2:/data busybox
```
Another option is to use non-plugin version i.e. RPM version of vSphere Docker Volume Service.
```
wget https://bintray.com/vmware/vDVS/download_file?file_path=docker-volume-vsphere-0.13.15d313a-1.x86_64.rpm
rpm -ivh docker-volume-vsphere-0.13.15d313a-1.x86_64.rpm
systemctl status docker-volume-vsphere
docker volume ls
docker volume create -d vsphere v3
docker run -it -v v3:/data busybox
```
@runecalico Photon OS team has root caused the issue and working on updated OVA. Issue is tracked in [PhotonOS #640](https://github.com/vmware/photon/issues/640)
In the meantime, you can try one of alternative proposed above (manually upgrade to latest Docker release or use RPM)
Feel free to reach out to us @ containers@vmware.com if you need assistance and we can schedule time to go over features and discuss your use case.
Awesome! I'll look into the suggestions and update. Re: RPM .. Since the doc mentioned that the RPM was no longer the supported/recommended install method (and it's not in the Photon repo) I didn't even look! (doh). Re: latest docker - I expected it was something like that as I was able to get the driver working on a RedHat 7 box with the latest docker .. Thanks a bunch, I was hoping to test Photon vs RHEL as container hosts in our Vmware environment ..
@pdhamdhere
For an install that has already gone south, the only option seems to be to completely uninstall docker and cleanup any docker locations (like /var/lib/docker and /etc/docker) and then re-install. At that point.. Both recommendations worked for me. Thanks for the help on getting this working, much appreciated.
Thanks @runecalico Keep us posted with results from your experiments and let us know if you run into any issues.
空Not sure what I'm missing here, trying to build on future release:
docker : hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1)
At C:\Program Files\WindowsPowerShell\Modules\navcontainerhelper\0.7.0.17\ContainerHandling\New-NavImage.ps1:301 char:1
+ docker build --isolation=$isolation --memory $memory --tag $imageName ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (hcsshim::Prepar...function. (0x1):String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
**Scripts used to create container and cause the issue**
```
New-BcContainer `
-accept_eula `
-containerName $containerName `
-credential $credential `
-auth $auth `
-artifactUrl $artifactUrl `
-imageName 'mybcimage' `
-assignPremiumPlan `
-licenseFile $licenseFile `
-updateHosts `
    -alwaysPull
```

**Full output of scripts**

```
NavContainerHelper is version 0.7.0.17
NavContainerHelper is running as administrator
Host is Microsoft Windows 10 Enterprise - 1909
Docker Client Version is 19.03.8
Docker Server Version is 19.03.8
Removing C:\ProgramData\NavContainerHelper\Extensions\bc-future
ArtifactUrl and ImageName specified
Image mybcimage:sandbox-17.0.14873.0-us doesn't exist
Building image mybcimage:sandbox-17.0.14873.0-us based on https://bcinsider.azureedge.net/sandbox/17.0.14873.0/us
Pulling latest image mcr.microsoft.com/dynamicsnav:10.0.18363.836-generic
10.0.18363.836-generic: Pulling from dynamicsnav
Generic Tag: 0.1.0.7
Container OS Version: 10.0.18363.836 (1909)
Host OS Version: 10.0.18363.836 (1909)
Copying Platform Artifacts
Copy Database
Copy Licensefile
Copy ConfigurationPackages
Copy Extensions
Copy Applications.*
c:\bcartifacts.cache\tmp637310111203645253
Sending build context to Docker daemon 1.745GB
Step 1/6 : FROM mcr.microsoft.com/dynamicsnav:10.0.18363.836-generic
 ---> a6a2db9cc5a2
Step 2/6 : ENV DatabaseServer=localhost DatabaseInstance=SQLEXPRESS DatabaseName=CRONUS IsBcSandbox=Y artifactUrl=https://bcinsider.azureedge.net/sandbox/17.0.14873.0/us?tokenremoved
 ---> Running in e757b8ab3bb8
```

**Additional context**
Happens every time, but this is my first time trying the new setup to create docker containers.
This output:
```
NavContainerHelper is version 0.6.5.7
NavContainerHelper is running as administrator
Host is Microsoft Windows 10 Enterprise - 2004
Docker Client Version is 19.03.8
Docker Server Version is 19.03.8
```
Doesn't seem to be from your machine???
What containerhelper version are you running?
And does it work without the imagename?
Thanks
Sorry, I forgot to delete the default example in the git issue (I updated the issue), but I'm running
NavContainerHelper is version 0.7.0.17
NavContainerHelper is running as administrator
Host is Microsoft Windows 10 Enterprise - 1909
Docker Client Version is 19.03.8
Docker Server Version is 19.03.8
and running without imagename gives me this error
DockerDo : docker.exe: Error response from daemon: hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1).
ExitCode: 125
Commandline: docker run --volume c:\bcartifacts.cache:c:\dl --label nav= --env isBcSandbox=Y --label version=17.0.14873.0 --label platform=16.0.14638.0 --label country=US --env artifactUrl=https://bcinsider.azureedge.net/sandbox/17.0.14873.0/us?tokenremoved --env licenseFile="c:\run\my\license.flf" --name bc-future --hostname bc-future --env auth=NavUserPassword --env username="admin" --env ExitOnError=N --env locale=en-US --env databaseServer="" --env databaseInstance="" --volume "C:\ProgramData\NavContainerHelper:C:\ProgramData\NavContainerHelper" --volume "C:\ProgramData\NavContainerHelper\Extensions\bc-future\my:C:\Run\my" --isolation process --restart unless-stopped --env enableApiServices=Y --env useSSL=N --volume "c:\windows\system32\drivers\etc:C:\driversetc" --env securePassword= --env passwordKeyFile="c:\run\my\aes.key" --env removePasswordKeyFile=Y --env accept_eula=Y --env accept_outdated=Y --detach mcr.microsoft.com/dynamicsnav:10.0.18363.836-generic
At C:\Program Files\WindowsPowerShell\Modules\navcontainerhelper\0.7.0.17\ContainerHandling\New-NavContainer.ps1:1494 char:19
+ ... if (!(DockerDo -accept_eula -accept_outdated:$accept_outdated - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,DockerDo
Please try to do a Reset to factory defaults of your docker desktop installation and retry.
I have reset to factory defaults but still get the same errors.
with imagename:
docker : hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1)
At C:\Program Files\WindowsPowerShell\Modules\navcontainerhelper\0.7.0.18\ContainerHandling\New-NavImage.ps1:301 char:1
+ docker build --isolation=$isolation --memory $memory --tag $imageName ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (hcsshim::Prepar...function. (0x1):String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
Without image name:
DockerDo : docker: Error response from daemon: hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1).
ExitCode: 125
Commandline: docker run --volume c:\bcartifacts.cache:c:\dl --label nav= --env isBcSandbox=Y --label version=17.0.14873.0 --label platform=16.0.14638.0 --label country=US --env artifactUrl=https://bcinsider.azureedge.net/sandbox/17.0.14873.0/us? --env licenseFile="c:\run\my\license.flf" --name bc-future --hostname bc-future --env auth=NavUserPassword --env username="admin" --env ExitOnError=N --env locale=en-US --env databaseServer="" --env databaseInstance="" --volume "C:\ProgramData\NavContainerHelper:C:\ProgramData\NavContainerHelper" --volume "C:\ProgramData\NavContainerHelper\Extensions\bc-future\my:C:\Run\my" --isolation process --restart unless-stopped --env enableApiServices=Y --env useSSL=N --volume "c:\windows\system32\drivers\etc:C:\driversetc" --env securePassword== --env passwordKeyFile="c:\run\my\aes.key" --env removePasswordKeyFile=Y --env accept_eula=Y --env accept_outdated=Y --detach mcr.microsoft.com/dynamicsnav:10.0.18363.836-generic
At C:\Program Files\WindowsPowerShell\Modules\navcontainerhelper\0.7.0.18\ContainerHandling\New-NavContainer.ps1:1502 char:19
+ ... if (!(DockerDo -accept_eula -accept_outdated:$accept_outdated - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,DockerDo
Error 125 is always something on the host - antivirus, or uninstall/re-install - it is very hard to troubleshoot.
Have you tried `-isolation hyperv`?
Yes I also tried -isolation hyperv
OK, I will systematically try disabling stuff to see if I can find the issue.
OK, I found the issue through another thread: https://github.com/docker/for-win/issues/3884
The issue was related to a driver file cbfs6.sys, located in the 'c:\windows\system32\drivers\' folder, which is part of the Callback File System signed by EldoS Corporation. It seems to be a driver shared with docker, but I have no idea why it was not uninstalled; it was actually a legacy leftover, which I checked with:
C:\Windows\system32> fltmc
So after removing this file my error went away.
Wow - thanks for sharing
I would never have been able to help with that.
Document id: 24
Predicted class = both
True class: unrelated
Text with highlighted words
I’m seeing a kubelet issue where cAdvisor will lose visibility to running docker containers over time (usually starts to happen within 24h of startup). When kubelet first starts, cAdvisor is showing all running containers in the `container_last_seen` metric, but at some point it will begin to only show
```
# HELP container_last_seen Last time a container was seen by the exporter
# TYPE container_last_seen gauge
container_last_seen{id="/"} 1.474463935e+09
```
But I can confirm my containers are running and healthy.
I see these errors in the kubelet log around the time when the containers get "lost":
```
Sep 21 13:08:43 cmce-k8s-worker-1.novalocal kubelet[1438]: W0921 13:08:43.351974 1438 raw.go:86] Error while processing event ("/var/lib/rkt/pods/run/08697cb1-ae87-484f-bec5-272f351794f5/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/cpu,cpuacct/system.slice/s
Sep 21 13:08:43 cmce-k8s-worker-1.novalocal kubelet[1438]: W0921 13:08:43.356112 1438 raw.go:86] Error while processing event ("/var/lib/rkt/pods/run/08697cb1-ae87-484f-bec5-272f351794f5/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/blkio/system.slice/system-
Sep 21 13:08:43 cmce-k8s-worker-1.novalocal kubelet[1438]: W0921 13:08:43.356183 1438 raw.go:86] Error while processing event ("/var/lib/rkt/pods/run/08697cb1-ae87-484f-bec5-272f351794f5/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/memory/system.slice/system
```
I'm running my cluster in OpenStack. Other environment details below.
```
# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"clean", BuildDate:"2016-07-01T19:26:38Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"clean", BuildDate:"2016-07-01T19:19:19Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
```
```
# docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 8acee1b
Built:
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 8acee1b
Built:
OS/Arch: linux/amd64
```
+@agsmith using rkt? i spotted a `/var/lib/rkt/pods/run` in the log. if so... cc'ing @kubernetes/sig-rktnetes
@dims we are using docker for our container engine, but we are using flannel as an overlay network on Openstack which I thought was running as a service on the coreos node.
I'm also getting this issue
```
$ cat /etc/os-release
NAME=CoreOS
ID=coreos
VERSION=1122.2.0
VERSION_ID=1122.2.0
BUILD_ID=2016-09-06-1449
PRETTY_NAME="CoreOS 1122.2.0 (MoreOS)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"
```
```
$ docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.4
Git commit: 1f8f545
Built:
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.4
Git commit: 1f8f545
Built:
OS/Arch: linux/amd64
```
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6+ae4550c", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"not a git tree", BuildDate:"2016-08-30T15:45:51Z", GoVersion:"go1.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.7", GitCommit:"a2cba278cba1f6881bb0a7704d9cac6fca6ed435", GitTreeState:"clean", BuildDate:"2016-09-12T23:08:43Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
```
Here's the same error I see without the lines being truncated:
```
Sep 24 09:07:29 ip-172-31-25-17.us-west-2.compute.internal kubelet[408]: W0924 09:07:29.052434 408 raw.go:86] Error while processing event ("/var/lib/rkt/pods/exited-garbage/c07c22d6-52af-4c54-96d2-df325714944d/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/cpu,cpuacct/system.slice/proc-sys-fs-binfmt_misc.mount": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /var/lib/rkt/pods/exited-garbage/c07c22d6-52af-4c54-96d2-df325714944d/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/cpu,cpuacct/system.slice/proc-sys-fs-binfmt_misc.mount: no such file or directory
Sep 24 09:07:29 ip-172-31-25-17.us-west-2.compute.internal kubelet[408]: W0924 09:07:29.053603 408 raw.go:86] Error while processing event ("/var/lib/rkt/pods/exited-garbage/c07c22d6-52af-4c54-96d2-df325714944d/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/blkio/system.slice/proc-sys-fs-binfmt_misc.mount": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /var/lib/rkt/pods/exited-garbage/c07c22d6-52af-4c54-96d2-df325714944d/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/blkio/system.slice/proc-sys-fs-binfmt_misc.mount: no such file or directory
Sep 24 09:07:29 ip-172-31-25-17.us-west-2.compute.internal kubelet[408]: W0924 09:07:29.054019 408 raw.go:86] Error while processing event ("/var/lib/rkt/pods/exited-garbage/c07c22d6-52af-4c54-96d2-df325714944d/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/memory/system.slice/proc-sys-fs-binfmt_misc.mount": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /var/lib/rkt/pods/exited-garbage/c07c22d6-52af-4c54-96d2-df325714944d/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/memory/system.slice/proc-sys-fs-binfmt_misc.mount: no such file or directory
```
I am using the docker driver, but I am using CoreOS's flanneld, which runs as a rkt container.
@agsmith any chance you're using the [Prometheus node exporter](https://github.com/prometheus/node_exporter)?
Right before the above log, I see the journal message
```
proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 23580 (node_exporter)
```
After turning off the prometheus node_exporter, cadvisor has stayed up continuously.
AFAICS the `no such file or directory` error should just not matter, as these cgroups don't belong to any containers that cAdvisor is going to monitor.
/cc @timstclair @vish for more help.
Can you check whether the containers are still present in the cAdvisor API? (`<kubelet>:4194/api/v2.1/stats/`)
Also, check the summary API: `<kubelet>:10250/stats/summary`
Piggy-backing on this, as I'm seeing the same problem with a v1.3.7 cluster running on CoreOS 1122.2.0 in EC2. Also using docker as the container runtime, with flanneld for overlay. Kubelet is run using the rkt wrapper. Prometheus node exporter runs directly on each host, but I don't see the log message @micahhausler reported.
cAdvisor API:
```
core@ip-10-112-10-173 ~ $ curl localhost:4194/api/v2.1/stats/
{}
```
Summary API:
```
{
"node": {
"nodeName": "ip-10-112-10-173.ec2.internal",
"startTime": null,
"memory": {
"time": "2016-09-27T17:55:42Z",
"availableBytes": 16045121536,
"usageBytes": 0,
"workingSetBytes": 0,
"rssBytes": 0,
"pageFaults": 0,
"majorPageFaults": 0
},
"network": {
"time": "2016-09-27T17:55:42Z",
"rxBytes": 3493924855,
"rxErrors": 0,
"txBytes": 2625671849,
"txErrors": 0 },
"fs": {
"availableBytes": 119724417024,
"capacityBytes": 130681061376,
"usedBytes": 5519851520
},
"runtime": {
"imageFs": {
"availableBytes": 119724417024,
"capacityBytes": 130681061376,
"usedBytes": 3699191916
}
}
},
"pods": []
}
```
This behavior began when I upgraded to v1.3.7 from v1.3.5.
I see the same results as @mgoodness as well as the log messages originally reported using k8s 1.3.6, ec2, and coreos 1122.2.0
An interesting note is that if I restart the kubelet (running directly on the box) I no longer see the error log messages, my Summary API actually lists pods, and my cAdvisor endpoint returns an empty json object.
I re-read the logs from the kubelet from before and after I restarted it. When the kubelet originally started up it mentioned that CPU and Memory Accounting were disabled as reported [here](https://github.com/kubernetes/kubernetes/blob/3aa72fa4804938818562f5efbaa2054f7bb919f3/pkg/kubelet/cm/container_manager_linux.go#L668) from cAdvisor. This appears to be expected behavior with the systemd cgroup. After restart however, this message is no longer printed and the summary endpoint is working.
Interestingly, reverting to v1.3.5 through a complete replacement of controllers and workers didn't resolve the issue. Prometheus will scrape pod metrics from the kubelets for almost exactly 12 hours, at which time the pod metrics disappear. Restarting kubelets makes the pods visible for another 12 hours.
I'm not yet seeing the issue on a pilot cluster running v1.4.0.
I got the same issue with kubernetes 1.2.0, coreos 1122.2.0, docker 1.10.3 after I changed the flanneld backend from `udp` to `host-gw` to improve network throughput and latency. Kubelet stops returning running containers after about 12 hours. Restarting kubelet seems to bring it back, but I'll keep an eye on it to see if the issue repeats.
I'm seeing the same. CoreOS 1185.1.0, docker 1.11.2, Kube 1.3.5.
After restarting kubelet, I've noticed that the `http://<node>:4194/api/v2.0/stats/` URL returns a large json blob including values for cpu/memory usage for each container.
However, after cAdvisor stops working as expected, cpu/memory usage metrics are returned as all zeros (0) for each container.
Is this related to default CoreOS cgroup location?
I'm using kubernetes 1.2.4, installed in AWS with kube-up and I'm seeing the same thing.
/cc @kubernetes/sig-node
I'm seeing same issue. CoreOS 1185.3.0, docker 1.11.2, kube 1.4.6.
@tarvip restarting the `kubelet` service worked for us (did not see stats loss for a few days now) - `systemctl restart kubelet`. The only lead we have is it happened after ~12 hours after a kubernetes upgrade + node replacement.
Confirming that I also see this.
CoreOS 1185.3.0, docker 1.11.2, kubelet 1.4.5
I just restarted all our kubelets, which fixed it. I'll set a reminder to check some nodes in ~12 hours.
I'm reasonably confident something about crash looping pods triggers this more rapidly.
```
Nov 29 01:52:01 ip-10-0-119-11.us-west-2.compute.internal systemd[1]: Started Kubernetes Kubelet.
Nov 29 01:52:01 ip-10-0-119-11.us-west-2.compute.internal kubelet[4338]: I1129 01:52:01.815109 4338 aws.go:745] Building AWS cloudprovider
<time passes>
Nov 29 03:57:40 ip-10-0-119-11.us-west-2.compute.internal kubelet[4338]: W1129 03:57:40.466645 4338 raw.go:87] Error while processing event ("/var/lib/rkt/pods/run/b4a28046-81e7-4edb-b0b7-3b877b9ffb10/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/memory/system.slice/var-lib-rkt-pods-run-2ddc6fe7\\x2d0cc1\\x2d4392\\x2da60a\\x2dd3f19fea3c9f-stage1-rootfs-opt-stage2-flannel-rootfs-run-flannel.mount": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /var/lib/rkt/pods/run/b4a28046-81e7-4edb-b0b7-3b877b9ffb10/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/memory/system.slice/var-lib-rkt-pods-run-2ddc6fe7\x2d0cc1\x2d4392\x2da60a\x2dd3f19fea3c9f-stage1-rootfs-opt-stage2-flannel-rootfs-run-flannel.mount: no space left on device
```
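The `no space left on device` from `inotify_add_watch` in the log above usually points at the per-user inotify watch limit rather than actual disk space. A diagnostic sketch (assuming a Linux host; the sysctl names are the standard kernel ones, everything else is illustrative):

```shell
# Diagnostic sketch: print the inotify limits that inotify_add_watch
# runs into when it reports ENOSPC ("no space left on device").
max_watches=$(cat /proc/sys/fs/inotify/max_user_watches)
max_instances=$(cat /proc/sys/fs/inotify/max_user_instances)
echo "max_user_watches=$max_watches max_user_instances=$max_instances"
# With root, the watch limit could be raised, e.g.:
#   sudo sysctl -w fs.inotify.max_user_watches=1048576
```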
This [looks related](https://github.com/kubernetes/kubernetes/issues/10421) (but unresolved). Going to try it out on a couple of nodes and see if that's the case.
Can you capture the output of `mount` and `sudo rkt list` on an impacted node?
This could be related to https://github.com/coreos/bugs/issues/1612, whereby a crashlooping fly pod (such as flanneld or kubelet) is not gc'd quickly enough and creates a large number of mounts.
The change fixing that was included in CoreOS version 1214.0.0, so the versions mentioned above would be impacted if `flanneld` or `kubelet` restart semi-frequently.
The output of `lsof` could also help.
rkt list (this node has only been up for a day):
```
UUID APP IMAGE NAME STATE CREATED STARTED NETWORKS
2cc79601 flannel quay.io/coreos/flannel:v0.6.2-amd64 running 1 day ago 1 day ago
```
Mounts:
```
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=30902148k,nr_inodes=7725537,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
/dev/xvda9 on / type ext4 (rw,relatime,seclabel,data=ordered)
/dev/xvda3 on /usr type ext4 (ro,relatime,seclabel,block_validity,delalloc,barrier,user_xattr,acl)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=45188)
tmpfs on /media type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel)
debugfs on /sys/kernel/debug type debugfs (rw,relatime,seclabel)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel)
systemd-1 on /boot type autofs (rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=45207)
xenfs on /proc/xen type xenfs (rw,relatime)
/dev/xvda6 on /usr/share/oem type ext4 (rw,nodev,relatime,seclabel,commit=600,data=ordered)
/dev/xvda1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
overlay on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs type overlay (rw,relatime,context="system_u:object_r:svirt_lxc_file_t:s0:c283,c735",lowerdir=/var/lib/rkt/cas/tree/deps-sha512-0f0c8dd425b44765e886b7b24a97530dd6ca2eca126b923db96293dd9630237a/rootfs,upperdir=/var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/overlay/deps-sha512-0f0c8dd425b44765e886b7b24a97530dd6ca2eca126b923db96293dd9630237a/upper,workdir=/var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/overlay/deps-sha512-0f0c8dd425b44765e886b7b24a97530dd6ca2eca126b923db96293dd9630237a/work)
overlay on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs type overlay (rw,relatime,context="system_u:object_r:svirt_lxc_file_t:s0:c283,c735",lowerdir=/var/lib/rkt/cas/tree/deps-sha512-fc53a0659ffcb73ab470371a2b8dfa29df8e65aa31b2196565a77b9ac691e8b1/rootfs,upperdir=/var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/overlay/deps-sha512-fc53a0659ffcb73ab470371a2b8dfa29df8e65aa31b2196565a77b9ac691e8b1/upper/flannel,workdir=/var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/overlay/deps-sha512-fc53a0659ffcb73ab470371a2b8dfa29df8e65aa31b2196565a77b9ac691e8b1/work/flannel)
devtmpfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/dev type devtmpfs (rw,nosuid,seclabel,size=30902148k,nr_inodes=7725537,mode=755)
tmpfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
mqueue on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/dev/hugepages type hugetlbfs (rw,relatime,seclabel)
proc on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/proc type proc (rw,nosuid,nodev,noexec,relatime)
systemd-1 on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=45188)
xenfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/proc/xen type xenfs (rw,relatime)
sysfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
securityfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
pstore on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)
selinuxfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/fs/selinux type selinuxfs (rw,relatime)
debugfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/sys/kernel/debug type debugfs (rw,relatime,seclabel)
tmpfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/tmp type tmpfs (rw,relatime,seclabel)
tmpfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/run/systemd type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/run/flannel type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
/dev/xvda9 on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/etc/ssl/etcd type ext4 (ro,relatime,seclabel,data=ordered)
/dev/xvda3 on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/etc/ssl/certs type ext4 (ro,relatime,seclabel,block_validity,delalloc,barrier,user_xattr,acl)
tmpfs on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/etc/resolv.conf type tmpfs (ro,seclabel,mode=755)
/dev/xvda9 on /var/lib/rkt/pods/run/2cc79601-3559-4b53-adc8-08b23bc901f6/stage1/rootfs/opt/stage2/flannel/rootfs/etc/hosts type ext4 (ro,relatime,seclabel,data=ordered)
tmpfs on /var/lib/kubelet/pods/82a8a10c-b7fc-11e6-9383-0a24662f8847/volumes/kubernetes.io~secret/default-token-34qq7 type tmpfs (rw,relatime,rootcontext=system_u:object_r:var_lib_t:s0,seclabel)
tmpfs on /var/lib/kubelet/pods/82ab3b28-b7fc-11e6-9383-0a24662f8847/volumes/kubernetes.io~secret/default-token-gqnut type tmpfs (rw,relatime,rootcontext=system_u:object_r:var_lib_t:s0,seclabel)
tmpfs on /var/lib/kubelet/pods/1eb5e64d-b80c-11e6-9383-0a24662f8847/volumes/kubernetes.io~secret/default-token-je8yx type tmpfs (rw,relatime,rootcontext=system_u:object_r:var_lib_t:s0,seclabel)
us-west-2b.fs-[redacted].efs.us-west-2.amazonaws.com:/ on /var/lib/kubelet/pods/1eb5e64d-b80c-11e6-9383-0a24662f8847/volumes/kubernetes.io~nfs/stuff type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.124.222,local_lock=none,addr=10.0.108.59)
tmpfs on /var/lib/kubelet/pods/173ca072-b80f-11e6-9383-0a24662f8847/volumes/kubernetes.io~secret/default-token-zxy2u type tmpfs (rw,relatime,rootcontext=system_u:object_r:var_lib_t:s0,seclabel)
us-west-2b.fs-[redacted].efs.us-west-2.amazonaws.com:/ on /var/lib/kubelet/pods/173ca072-b80f-11e6-9383-0a24662f8847/volumes/kubernetes.io~nfs/preprod-b type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.124.222,local_lock=none,addr=10.0.100.128)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=6183636k,mode=700,uid=1000,gid=1000)
```
lsof counts `sudo lsof | awk '{print $1}' | sort | uniq -c | sort -n`
```
120 fleetd
127 systemd
132 kworker/1
132 kworker/2
288 node-prob
633 dragent
780 flanneld
984 logshipper
984 kube-prox
1326 etcd2
1470 kubelet
3138 photoprocessor
3773 java
3829 docker
4219 container
```
To be clear, that's the state after the problem manifested on that node, right?
Yup, that's on a non-functional node. The easiest way I've found to confirm that, or rather what I'm using as a check to alert me is `curl -s localhost:10255/stats/summary | jq -Mr '.pods | any'`. If that pods array is empty then restart the kubelet.
(`kubectl top pod [pod on that node]` also fails on that node)
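The empty-pods check above can be wrapped into a small watchdog. This is only a sketch of the workaround discussed in this thread: the read-only port 10255 endpoint and the `systemctl restart kubelet` step come from the comments here, and `grep` stands in for `jq` so the check works on minimal hosts.

```shell
# pods_missing: read the kubelet summary JSON on stdin and succeed
# when the "pods" array is empty (i.e. pod stats have been lost).
pods_missing() {
  # strip whitespace so the literal pattern matches regardless of formatting
  tr -d ' \n\t' | grep -q '"pods":\[\]'
}

# Usage on a node (assumption: kubelet read-only port 10255 is enabled):
#   curl -s localhost:10255/stats/summary | pods_missing && sudo systemctl restart kubelet
```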
I'm seeing the same issue with CoreOS 1185.5.0, Docker 1.11.2 and Kubernetes 1.4.6.
After restarting kubelet, it starts working including values for cpu/memory usage for each container. Otherwise, after 24 hours it only shows cluster level data but not the container/pod stats.
Looks like what [cadvisor#1573](https://github.com/google/cadvisor/pull/1573) fixes.
I'm still seeing the same. CoreOS 1298.7.0, docker 1.12.6, Kube 1.5.6.
After restarting kubelet, I've noticed that the `http://<node>:4194/api/v2.0/stats/` URL returns a large json blob including values for cpu/memory usage for each container.
However, after cAdvisor stops working as expected, cpu/memory usage metrics are returned as all zeros (0) for each container.
After restarting the kubelet, I get cpu/memory data again...
Guys, any chance this bug will finally be fixed in 1.5.x (1.4.x would be even better)? The horizontal pod autoscaler, so heavily advertised at KubeCon 2017, just doesn't work because of this bug - isn't that a shame for the whole kubernetes as a product? I don't think a cron job that restarts the kubelet based on `curl -s localhost:10255/stats/summary | jq -Mr '.pods | any'` is a viable workaround...
Also in: `v1.6.4`
```
$ cat /etc/os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1353.7.0
VERSION_ID=1353.7.0
BUILD_ID=2017-04-26-2154
PRETTY_NAME="Container Linux by CoreOS 1353.7.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
```
```
$ docker version
Client:
Version: 1.12.6
API version: 1.24
Go version: go1.6.3
Git commit: d5236f0
Built: Wed Apr 26 21:47:57 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Go version: go1.6.3
Git commit: d5236f0
Built: Wed Apr 26 21:47:57 2017
OS/Arch: linux/amd64
```
i'm seeing this in 1.7.4 (CentOS Linux release 7.3.1611 (Core), 4.9.13-1)
i can curl the metrics endpoint continuously and see the stats appear and disappear (missing entries) even though the containers are stable.
@laverite There's a long discussion going on in google/cadvisor#1704 about that issue. I think the titled issue is separate.
@micahhausler that fits my problem much better. thanks for pointing me in the right direction!
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions. -->
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one): Maybe both? Either a documentation bug or a feature request to handle restrictive firewalls?
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out
information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
Please provide the following details:
**Environment**:
```
minikube version: v0.28.2
OS:
PRETTY_NAME="Debian GNU/Linux buster/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
VM driver:
"DriverName": "virtualbox",
ISO version
"Boot2DockerURL": "file:///home/brian/.minikube/cache/iso/minikube-v0.28.1.iso",
```
**What happened**: When running `minikube start`, minikube does a bunch of stuff, and then fails with:
```
Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
```
or (on subsequent attempts to start minikube):
```
error getting Pods with label selector "k8s-app=kube-proxy" [Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy: dial tcp 192.168.99.100:8443: i/o timeout]
```
**What you expected to happen**: minikube would successfully set up and start a cluster
**How to reproduce it** (as minimally and precisely as possible):
Set up some fairly aggressive iptables rules. The key bit is that the `INPUT` chain should default to `DROP` (IMO a reasonable thing to do in the name of security), but specific rules in the chain should still allow normal traffic to work. Something like this will *probably* do the trick (haven't tested, but I think this is a 'safe' subset of my rules):
```
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i $NAME_OF_YOUR_NETWORK_INTERFACE -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT
iptables -A INPUT -m pkttype --pkt-type broadcast -j ACCEPT
```
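For the VirtualBox-driver case specifically, a possible companion rule is to accept traffic from the host-only interface before the `DROP` policy applies. This is a sketch only: `vboxnet0` and the `192.168.99.0/24` subnet are assumptions based on the default minikube/VirtualBox setup shown in this report, so adjust them to your machine.

```
iptables -A INPUT -i vboxnet0 -s 192.168.99.0/24 -j ACCEPT
```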
**Output of `minikube logs` (if applicable)**:
```
$ minikube start --vm-driver virtualbox --logtostderr --loglevel 0
W0822 17:53:36.374904 21879 root.go:148] Error reading config file at /home/brian/.minikube/config/config.json: open /home/brian/.minikube/config/config.json: no such file or directory
I0822 17:53:36.375018 21879 notify.go:121] Checking for updates...
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
I0822 17:53:36.507424 21879 cluster.go:69] Machine does not exist... provisioning new machine
I0822 17:53:36.507437 21879 cluster.go:70] Provisioning machine with config: {MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v0.28.1.iso Memory:2048 CPUs:2 DiskSize:20000 VMDriver:virtualbox HyperkitVpnKitSock: HyperkitVSockPorts:[] XhyveDiskDriver:ahci-hd DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KvmNetwork:default Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: GPU:false}
I0822 17:53:36.507512 21879 downloader.go:56] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v0.28.1.iso
I0822 17:54:34.943926 21879 ssh_runner.go:57] Run: sudo rm -f /etc/docker/ca.pem
I0822 17:54:35.058237 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
I0822 17:54:35.130934 21879 ssh_runner.go:57] Run: sudo rm -f /etc/docker/server.pem
I0822 17:54:35.183997 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
I0822 17:54:35.260552 21879 ssh_runner.go:57] Run: sudo rm -f /etc/docker/server-key.pem
I0822 17:54:35.315520 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/docker
Getting VM IP address...
Moving files into cluster...
I0822 17:54:37.909026 21879 kubeadm.go:208] Container runtime flag provided with no value, using defaults.
I0822 17:54:37.909107 21879 ssh_runner.go:57] Run: sudo rm -f /usr/bin/kubeadm
I0822 17:54:37.909133 21879 ssh_runner.go:57] Run: sudo rm -f /usr/bin/kubelet
I0822 17:54:38.007898 21879 ssh_runner.go:57] Run: sudo mkdir -p /usr/bin
I0822 17:54:38.007898 21879 ssh_runner.go:57] Run: sudo mkdir -p /usr/bin
I0822 17:54:41.575516 21879 ssh_runner.go:57] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0822 17:54:41.623957 21879 ssh_runner.go:57] Run: sudo mkdir -p /lib/systemd/system
I0822 17:54:41.714413 21879 ssh_runner.go:57] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0822 17:54:41.775677 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d
I0822 17:54:41.857335 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/kubeadm.yaml
I0822 17:54:41.923684 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib
I0822 17:54:41.995464 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0822 17:54:42.043847 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.060140 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0822 17:54:42.107383 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.159549 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/manifests/addon-manager.yaml
I0822 17:54:42.208005 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/manifests/
I0822 17:54:42.286221 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/dashboard-dp.yaml
I0822 17:54:42.343484 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.463757 21879 ssh_runner.go:57] Run: sudo rm -f /etc/kubernetes/addons/dashboard-svc.yaml
I0822 17:54:42.523774 21879 ssh_runner.go:57] Run: sudo mkdir -p /etc/kubernetes/addons
I0822 17:54:42.655665 21879 ssh_runner.go:57] Run:
sudo systemctl daemon-reload ||
sudo systemctl enable kubelet ||
sudo systemctl start kubelet
Setting up certs...
I0822 17:54:42.883379 21879 certs.go:47] Setting up certificates for IP: 192.168.99.100
I0822 17:54:42.894847 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/ca.crt
I0822 17:54:42.943360 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:42.997602 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/ca.key
I0822 17:54:43.047369 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.100806 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/apiserver.crt
I0822 17:54:43.147333 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.199497 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/apiserver.key
I0822 17:54:43.243371 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.299688 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client-ca.crt
I0822 17:54:43.351535 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.471891 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client-ca.key
I0822 17:54:43.539445 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.594391 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client.crt
I0822 17:54:43.647541 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.732779 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/certs/proxy-client.key
I0822 17:54:43.783377 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube/certs/
I0822 17:54:43.837781 21879 ssh_runner.go:57] Run: sudo rm -f /var/lib/localkube/kubeconfig
I0822 17:54:43.887435 21879 ssh_runner.go:57] Run: sudo mkdir -p /var/lib/localkube
Connecting to cluster...
Setting up kubeconfig...
I0822 17:54:44.135433 21879 config.go:101] Using kubeconfig: /home/brian/.kube/config
Starting cluster components...
I0822 17:54:44.136803 21879 ssh_runner.go:80] Run with output:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
E0822 17:57:48.615897 21879 start.go:300] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
```
**Anything else do we need to know**:
Resetting the default policy for the `INPUT` chain to `ACCEPT` (`iptables -P INPUT ACCEPT`) causes the problem to go away and the cluster to start successfully.
I'm not sure how feasible this is, but it would be great if minikube could add the required specific `ACCEPT` rules to the `INPUT` chain to allow all this to work with a default policy of `DROP`. At the very least, updating the Linux-specific docs to call out this issue and the workaround would be great.
I *think* I could also make my rules a little more generic, by not specifically allowing only traffic from my WiFi interface, but it'd be great to not have to do this.
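For anyone wanting to keep a default `DROP` policy, here is a sketch of the kind of explicit `ACCEPT` rules involved; the `vboxnet0` interface name and the `192.168.99.0/24` subnet are assumptions based on VirtualBox's default host-only network, so adjust them to your setup:

```shell
# Accept traffic arriving from the minikube host-only network instead of
# loosening the whole INPUT chain (interface and subnet are assumptions):
sudo iptables -A INPUT -i vboxnet0 -s 192.168.99.0/24 -j ACCEPT
# Accept return traffic for connections the host initiated (e.g. to the apiserver):
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

These rules must come before the policy's implicit drop takes effect, i.e. appended while the chain otherwise only drops unmatched traffic.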
++1 to your idea of documenting what kind of iptables configuration is required to make this work.
This error is reported in multiple gh issues. I am not sure if the solution I posted in 3022 is valid for your particular issue but I'll leave it here for your review.
https://github.com/kubernetes/minikube/issues/3022#issuecomment-424145410
I saw the same error when starting minikube `v0.30.0` (that runs k8s `v1.10.0`) on MacOS `Mojave 10.14.1` with VirtualBox `5.2.22`:
```
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E1201 00:55:03.176754 56344 start.go:297] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
```
Per @iuliancorcoja's comment in https://github.com/kubernetes/minikube/issues/3022#issuecomment-425495721, deleting `vboxnet0` in the VirtualBox GUI under `Global Tools -> Host Network Manager` resolved this issue for me.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
/lifecycle stale
This is now documented at https://github.com/kubernetes/minikube/blob/master/docs/networking.md
**Description**
I noticed some problems with DNS resolution inside a overlay network and Swarm.
DNS entries are not always updated automatically.
I have 10 containers over 4 hosts on Ubuntu 16.04 connected by Swarm. I created an overlay network for those containers.
When I redeploy one of those containers (I stop the current one, rename it to OLD, and create a new one with the same name), the new container will not always have the same IP as before (which is not a problem). But it looks like the DNS entry is not always updated for the other containers in the network, so the newly created container is unreachable from them.
My docker version is 1.13.0.
**Steps to reproduce the issue:**
1. Create a Swarm architecture with multiple hosts
2. Create a overlay network
3. Deploy a few containers with specific names on each node and attach them to this network
4. Remove one of these containers and recreate exactly the same one.
**Describe the results you received:**
If the IP of this new container has changed, the DNS entry will not be updated automatically for the other containers. If you try to ping this new container's DNS name from other containers, sometimes you will notice that the resolved IP is actually the IP of the previously removed container.
**Describe the results you expected:**
DNS entries should be updated for every container whenever a container's IP changes.
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `docker version`:**
```
Docker version 1.13.0, build 49bf474
```
**Output of `docker info`:**
```
Containers: 14
Running: 10
Paused: 0
Stopped: 4
Images: 449
Server Version: 1.13.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 571
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: okiqm8slow52nm4rx8qt08rpc
Is Manager: true
ClusterID: 7b3cohqvxgp3q9qm19xq4dj97
Managers: 2
Nodes: 4
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 172.17.10.83
Manager Addresses:
172.17.1.224:2377
172.17.10.83:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-43-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 19.61 GiB
Name: piranha
ID: 3SY6:AAEL:NLUO:4BTD:U5ZK:AMWA:PNGQ:4ZVM:F7S4:7GFH:E2KG:V32H
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
172.17.11.100:5000
127.0.0.0/8
Live Restore Enabled: false
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Hi,
Any news about this issue?
@ggaugry Are you using swarm mode with `attachable` networks or the classic swarm (which also needs an external KV store) ?
@sanimej: I use swarm mode with the attachable option.
I still have the problem. Results of a dig for DNS "mysql":
root@24c350f685ef:/home/nightmare# dig mysql
; <<>> DiG 9.9.5-9+deb8u10-Debian <<>> mysql
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13379
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;mysql. IN A
;; ANSWER SECTION:
mysql. 600 IN A 192.168.14.17
mysql. 600 IN A 192.168.14.25
mysql. 600 IN A 192.168.14.8
;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Mon May 22 09:44:01 UTC 2017
;; MSG SIZE rcvd: 86
I think I ran into this one myself. Overlay networks were managed by swarm mode and attachable. We migrated containers from running via docker-compose and classic swarm to deploying services in swarm mode. After this migration, from one node it was resolving what I believe was the old address along with the new VIP address. The other node only resolved the new VIP address.
Recreating the container that happened to have the invalid IP and the service with the DNS name we were trying to correct (using `docker service update --force`) did not remove the stale mapping. In the end we bounced the docker daemon with the bad DNS mapping and it came back up with the correct entries.
Our Docker version:
```
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Fri Mar 24 00:36:45 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.1-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: c6d412e
Built: Fri Mar 24 00:36:45 2017
OS/Arch: linux/amd64
Experimental: false
```
Our servers are running RHEL 7.2 with the 3.10.0-327 kernel.
Unfortunately I don't believe we can reliably recreate this, and it happened in production during a limited outage window so I didn't have time to gather logs.
As best I can tell, there are situations where stopping a container on an attachable overlay network doesn't propagate the change to the other nodes in the swarm.
Hi,
Does anyone have news about this issue? I read in another thread that it's a known issue and someone from Docker was working on a patch, but I don't have additional information.
Our version is the same, 17.03.1-ce.
@danielmrosa 17.06 has a lot of patches and should work much better all-around wrt service discovery and networking.
GA is imminent but RC's are available on the `test` channel.
@cpuguy83
Hi Brian, thanks a lot for your feedback, let´s try 17.06
Hi all
Still the same issue with docker 17.06.01-ce:
Example inside one container:
root@b617061009ee:/# getent hosts bazoocaster-sfr0092vu
192.168.14.20 bazoocaster-sfr0092vu
192.168.14.10 bazoocaster-sfr0092vu
When does it happen?
We realised this happens when we redeploy a new container version. Example:
- Stop container with name "test"
- Disconnect it from the overlay network
- Rename it with name "test_OLD"
- Redeploy a new container "test" (attached on the overlay network)
This is the scenario where we sometimes see these DNS problems (most of the time it works well, but sometimes the DNS resolution goes crazy).
The only way to fix this DNS problem is to remove the problematic node (the one which appears to have 2 IPs) from the Swarm and rejoin it.
Hi All,
We are still facing this problem on 17.06.01-ce too.
It seems the problem has not been solved yet. Does anyone have news about it? I'm not sure, but @fcrisciani is probably working on this issue.
@ggaugry I still have to go through the rename part, that can create issues.
@danielmrosa can you share something more about it? Do you have a set of steps that are consistently reproducing the problem?
@fcrisciani : thanks for the answer. I was saying earlier that remove/re-join the Swarm was a workaround but it actually doesn't work. Do you have any idea on how I can clean up the wrong DNS entries?
FYI:
root@e429d14e3de9:/# getent hosts esus-sfr0092vu
192.168.14.22 esus-sfr0092vu
192.168.14.21 esus-sfr0092vu
The 2 IPs shown are the 2 IPs actually used by the containers running on the target machine:
"Containers": {
"d84bf6f6c871ee19a0fc946b927bff0e5566553d5370cb0d677dbc1edd55da17": {
"Name": "bazoocaster-sfr0092vu",
"EndpointID": "e3a6b2ea98fd9729989ad096cd9eeb6162193f4afdcfef4c6d51c0195df90352",
"MacAddress": "02:42:c0:a8:0e:16",
"IPv4Address": "192.168.14.22/24",
"IPv6Address": ""
},
"d9fc0a52feea016e104140132062db27527de97ce8d3d5272d39ebc60ee61139": {
"Name": "esus-sfr0092vu",
"EndpointID": "74c674d46fa980f5945ea9731259561c6c170792d8b698d3c12db20480297cc0",
"MacAddress": "02:42:c0:a8:0e:15",
"IPv4Address": "192.168.14.21/24",
"IPv6Address": ""
}
},
@fcrisciani , Thanks for your answer.
Unfortunately, it's not easy to reproduce the problem. The good news is that the last time we saw this problem was two weeks ago. We have approximately 44 tasks running on 3 workers that have the manager role too.
When the problem occurs, one task name resolves to 2 IPs. Even if we destroy the task related to the second IP, the IP does not leave the DNS database quickly. I can't figure out in what situation this problem occurs, sorry.
@fcrisciani: we removed the step that renames the container to _OLD, to test. We still have the problem on a clean install.
@ggaugry in this output: https://github.com/moby/moby/issues/30487#issuecomment-326859139 you did the rename of the container and the DNS did not get updated correct? I can see the 2 names that are different. I'm trying to narrow down which are the set of the steps to reproduce easily to debug it.
@danielmrosa in your case is a permanent failure or is a transient one?
@fcrisciani: yes.
Steps you could try to reproduce:
- Initiate a swarm cluster with 3 managers and a few workers and create an overlay network
- Create and start 3 basic containers attached to the overlay network on the 3 managers: "test_dns1", "test_dns2", "test_dns3", for example
- Create and start a basic container "test" attached to the overlay network on one of the workers and start it
- Stop it, disconnect it from the overlay network and rename it "test_OLD"
- Create a new container with the same name "test" attached to the overlay network on the same worker and start it
Now, if you get into the containers on the manager nodes (test_dns1, test_dns2, test_dns3) and run the command "getent hosts test", you will probably get 2 IPs.
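The steps above can be sketched as shell commands; this is only an illustration (the network name `testnet` and the `alpine` image are placeholders, and each block must be run on the node indicated in the comments):

```shell
# On one manager: create an attachable overlay network (name is a placeholder)
docker network create --driver overlay --attachable testnet

# On each of the 3 managers: a long-running container to resolve from
docker run -d --name test_dns1 --network testnet alpine sleep infinity

# On a worker: the container that will be redeployed
docker run -d --name test --network testnet alpine sleep infinity
docker stop test
docker network disconnect testnet test
docker rename test test_OLD
docker run -d --name test --network testnet alpine sleep infinity

# From test_dns1 (and test_dns2/test_dns3): expect exactly one IP;
# two IPs for "test" would indicate the stale-DNS-entry bug
docker exec test_dns1 getent hosts test
```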
@ggaugry ok thanks for the steps will try to take a look and will update here if I find something
hi,
just encountered the same problem with docker version 18.02:
pi@raspberrypi:~ $ docker version
Client:
Version: 18.02.0-ce
API version: 1.36
Go version: go1.9.3
Git commit: fc4de44
Built: Wed Feb 7 21:24:08 2018
OS/Arch: linux/arm
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.02.0-ce
API version: 1.36 (minimum version 1.12)
Go version: go1.9.3
Git commit: fc4de44
Built: Wed Feb 7 21:20:13 2018
OS/Arch: linux/arm
Experimental: false
pi@raspberrypi:~ $
I tried with both a custom network and the default ingress one. All my Docker hosts are on the same LAN with no firewall between them and only one IPv4 and one IPv6 address on each host. 2 containers are running, one nginx and one nextcloud. Even when they are on the same host, they don't seem to see each other.
Hi @fcrisciani,
I work with @danielmrosa and today we encountered a similar problem in our docker swarm cluster.
First, to answer your last question to Daniel, this isn't a permanent issue.
We noticed that 2 services on docker swarm cluster are using the same VIP.
Relevant pieces of docker inspect:
**docker inspect service1:**
"CreatedAt": "2018-03-08T19:30:23.511606854Z",
"UpdatedAt": "2018-03-08T19:30:23.515895534Z",
"VirtualIPs": [
{
"NetworkID": "kpflo6rdlj8bowfuzgd8ocib7",
"Addr": "10.32.40.32/22"
.........................................
**docker inspect service2:**
"CreatedAt": "2017-11-04T03:26:56.125013973Z",
"UpdatedAt": "2018-03-12T18:10:04.829164464Z",
"VirtualIPs": [
{
"NetworkID": "kpflo6rdlj8bowfuzgd8ocib7",
"Addr": "10.32.40.32/22"
.........................................
Docker version:
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:10:14 2017
OS/Arch: linux/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:46 2017
OS/Arch: linux/amd64
Experimental: false
We found this problem because we are trying to reach service2 at port 80, but we were reaching service1 instead.
We don't know yet how to reproduce this issue.
@fcrisciani any recommended action?
**Edited:** we found this PR in the 17.12.0-ce release notes: https://github.com/docker/swarmkit/pull/2474. Is this the same problem?
@wrg02
if you are deploying services through the API and you are not specifying the EndpointSpec, then you can run into such a scenario when the leader restarts. The quick workaround is to always specify the EndpointSpec. The fix for that is: https://github.com/docker/swarmkit/pull/2505
We also recently found another issue in the IPAM that is fixed here: https://github.com/docker/libnetwork/pull/2105
Unfortunately, with the current timeline I'm not sure the fix will make it into 18.03, but it will definitely be included in the next release.
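As a sketch of that workaround, the endpoint mode can be pinned explicitly when creating a service from the CLI, so the EndpointSpec is always part of the service spec (the service name, network, and image below are placeholders):

```shell
# Explicitly specify the endpoint mode instead of relying on the default,
# so the spec is never ambiguous after a leader restart (names are placeholders):
docker service create --name web --network my-overlay --endpoint-mode vip nginx:alpine
```

The same applies when creating services through the API: populate `EndpointSpec.Mode` rather than leaving it empty.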
@fcrisciani
thanks for your help, we are now specifying EndpointSpec to avoid this problem.
We are happy to know that this will be fixed soon.
We will post here if we find another related issue.
Hi @fcrisciani
Even if we declare **endpoint_mode: vip** and use the latest stable version **18.03.0-ce**, we are again facing the duplicate IP address problem in the DNS database. We have a cluster with 3 managers and 7 workers. We removed two nodes for maintenance (availability drain). Afterwards, the cluster became unstable (high load), probably because tasks were trying to move to other nodes. After that, we got many 502 Bad Gateway errors on one of our services. On further investigation, we saw a duplicate IP address in the DNS database again.
This problem is blocking us from moving to production using swarm mode. Do you have any recommendation?
To troubleshoot it now, we are trying to use this tool: https://github.com/docker/libnetwork/blob/master/cmd/diagnostic/README.md
Thanks in advance!
+1
I'm also having this issue when using docker swarm mode.
```
docker --version
Docker version 18.03.0-ce, build 0520e24
```
Service records (docker DNS) sometimes end up with old IP addresses from the previous service.
@viniciusramosdefaria @2416ryan please guys, let's not create another issue where we just post information that is not useful for debugging. If you have steps or hints on how to reproduce the condition, please share them.
@danielmrosa do you have any way to reproduce?
The first check I would do is to verify that the container that was associated with the extra IP actually exited properly; I was reading that 18.03 had an issue where containers were remaining stuck, so maybe the cleanup did not happen yet because of that bug.
First thing to check is the network inspect
0) do a `docker network inspect -v <network id>` on a node that has a container for that network. That will show the endpoint ID of the endpoint with the old IP.
If you have the daemon in debug mode you can grep for it and see if there was an error on the cleanup
If that is not the case I would start taking a look to the network db state:
I will suggest the following greps on the daemon logs:
1) outgoing queue length: `grep "stats.*<network id>"`
you will see a bunch of lines like:
```
Apr 03 10:46:38 ip-172-31-22-5 dockerd[1151]: time="2018-04-03T10:46:38.902639904Z" level=info msg="NetworkDB stats <hostname>(<node id>) - netID:3r6rqkvee3l4c7dx3c9fmf2a8 leaving:false netPeers:3 entries:12 Queue qLen:0 netMsg/s:0"
```
netPeers should match the number of nodes that have containers on that network; entries is the number of entries in the database (it's not 1:1 with the containers); qLen should always be 0 when the system is stable, and will spike only when there are changes in the cluster.
2) grep `healthscore` this will show up only if nodes have connectivity issues, the higher the number the worse is the issue
3) grep `change state` this can identify the change of state of networkdb nodes, maybe there is some nodes that are not stable in the cluster.
If you use the diagnostic tool, you can also identify who was the node owner of the extra entry and track back with the last grep if the node left the cluster at some point and why the cleanup did not happen.
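The greps above can be collected into one small script; this is a sketch assuming the daemon logs go to journald, and the network ID below is the example one from this thread (substitute your own):

```shell
#!/bin/sh
# Placeholder: substitute your overlay network ID
NETID=3r6rqkvee3l4c7dx3c9fmf2a8
# NetworkDB stats for the network: check netPeers, entries and qLen
journalctl -u docker --no-pager | grep "stats.*${NETID}"
# Only present when nodes have connectivity issues; higher scores are worse
journalctl -u docker --no-pager | grep "healthscore"
# State changes of networkdb nodes: look for members that keep flapping
journalctl -u docker --no-pager | grep "change state"
```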
Let me know if you find a repro state or how is going the debug. If you want you can also share with me the logs of the nodes and I can help taking a look. I will need anyway the information mentioned above
Hi @fcrisciani
Sorry, I don't have a way to reproduce this problem. Is there a way to use Consul for DNS discovery in swarm mode?
The idea is to bypass the internal DNS service of swarm, because it is impossible to use; it's terribly unstable.
We are aborting our project to move to production using swarm mode.
At this moment, one task responds with many IP addresses when I run the command `getent hosts tasks.servicename`, and just one task is running.
Please tell me if there is a way to use Consul for DNS discovery to register container IPs in the Consul KV store. AFAIK, it seems that is not possible in swarm mode, but maybe I'm wrong.
Thanks in advance,
@danielmrosa happy to help if you can share more info. As I was telling you before, we are not seeing other users report the instability you are experiencing, so it may be something in your environment. Check that the TCP/UDP 7946 ports for networkDB are open on your nodes, and you can try with a brand new network to start from a 100% clean state.
If you can share some engine logs and run `https://github.com/docker/libnetwork/blob/master/support.sh`, indicating which extra IPs you are seeing, I can take a quick look.
For what concerns Consul, you can, but you will have to handle it as a separate container on your side; there is no automatic integration to decide the backend.
Version: 2.3.3.2 (46784)
Channel: edge
Sha1: a03f51183ae5c3a98a5c8f2bb48f4a881e803158
Started on: 2020/07/27 16:18:06.396
Resources: C:\Program Files\Docker\Docker\resources
OS: Windows 10 Pro
Edition: Professional
Id: 2004
Build: 20175
BuildLabName: 20175.1000.amd64fre.rs_prerelease.200717-1349
File: C:\Users\patate\AppData\Local\Docker\log.txt
CommandLine: "C:\Program Files\Docker\Docker\Docker Desktop.exe"
You can send feedback, including this log file, at https://github.com/docker/for-win/issues
[16:18:06.507][GUI ][Info ] Starting...
[16:18:06.651][ComponentVersions ][Info ] Edition community
[16:18:06.657][ComponentVersions ][Info ] Edition community
[16:18:08.468][AppMigrator ][Info ] Current version: 6. Latest version: 6
[16:18:08.547][TrackingSettings ][Info ] Crash report and usage statistics are enabled
[16:18:08.550][SegmentApi ][Info ] Usage statistic: Identify
[16:18:08.857][SegmentApi ][Info ] Usage statistic: appLaunched
[16:18:09.408][ApplicationTemplatesTracking][Info ] Cannot list templates
[16:18:09.409][SegmentApi ][Info ] Usage statistic: eventTemplatesInfo
[16:18:09.410][SegmentApi ][Info ] Usage statistic: heartbeat
[16:18:10.025][LoggingMessageHandler][Info ] [8cb506e1] <BackendAPIClient start> GET http://backend/version
[16:18:10.074][LoggingMessageHandler][Info ] [8cb506e1] <BackendAPIClient end> GET http://backend/version -> 200 OK (took 47ms)
[16:18:10.220][LoggingMessageHandler][Info ] [e3b38d7e] <BackendAPIClient start> GET http://backend/hyperv/vhdx-size?path=C:%5CProgramData%5CDockerDesktop%5Cvm-data%5CDockerDesktop.vhdx
[16:18:10.231][LoggingMessageHandler][Info ] [e3b38d7e] <BackendAPIClient end> GET http://backend/hyperv/vhdx-size?path=C:%5CProgramData%5CDockerDesktop%5Cvm-data%5CDockerDesktop.vhdx -> 200 OK (took 10ms)
[16:18:10.237][LoggingMessageHandler][Info ] [fe341b57] <BackendAPIClient start> POST http://backend/migrate/app
[16:18:10.246][LoggingMessageHandler][Info ] [fe341b57] <BackendAPIClient end> POST http://backend/migrate/app -> 204 NoContent (took 8ms)
[16:18:10.251][Engines ][Debug ] Starting
[16:18:10.287][LoggingMessageHandler][Info ] [7c6113bb] <BackendAPIClient start> POST http://backend/versionpack/enable
[16:18:10.333][LoggingMessageHandler][Info ] [7c6113bb] <BackendAPIClient end> POST http://backend/versionpack/enable -> 204 NoContent (took 45ms)
[16:18:10.435][LoggingMessageHandler][Info ] [e1928b28] <BackendAPIClient start> POST http://backend/cloudcli/toggle
[16:18:10.447][LoggingMessageHandler][Info ] [e1928b28] <BackendAPIClient end> POST http://backend/cloudcli/toggle -> 204 NoContent (took 11ms)
[16:18:10.471][GoBackendProcess ][Info ] Starting C:\Program Files\Docker\Docker\resources\com.docker.backend.exe -addr unix:\\.\pipe\dockerBackendApiServer
[16:18:10.477][GoBackendProcess ][Info ] Started
[16:18:10.486][EngineStateMachine][Debug ] sending state Docker.ApiServices.StateMachines.StartTransition to state change sink
[16:18:10.486][EngineStateMachine][Debug ] State Docker.ApiServices.StateMachines.StartTransition sent to state change sink
[16:18:10.492][EngineStateListener][Debug ] received state Docker.ApiServices.StateMachines.StartTransition from LinuxWSL2
[16:18:10.498][EngineStateNotificationRecorder][Debug ] Registered state {"State":"starting","Mode":"linux","date":1595881090}
[16:18:10.499][SystrayNotifications][Info ] Docker is starting
[16:18:10.502][LinuxWSL2Engine ][Info ] Terminating lingering processes and wsl distros and patching host file
[16:18:10.507][LoggingMessageHandler][Info ] [331ccf7c] <BackendAPIClient start> POST http://backend/dns/refresh-hosts
[16:18:10.701][WSL2Provisioning ][Info ] Checking docker-desktop
[16:18:10.706][WSL2Provisioning ][Info ] deploying WSL distro docker-desktop to C:\Users\patate\AppData\Local\Docker\wsl\distro
[16:18:10.710][LoggingMessageHandler][Info ] [331ccf7c] <BackendAPIClient end> POST http://backend/dns/refresh-hosts -> 204 NoContent (took 203ms)
[16:18:11.531][LoggingMessageHandler][Info ] [efd89acd] <Server start> GET http://unix/versions
[16:18:11.670][LoggingMessageHandler][Info ] [efd89acd] <Server end> GET http://unix/versions -> 200 OK (took 139ms)
[16:18:11.726][GoBackendProcess ][Info ] ⇨ http server started on \\.\pipe\dockerVpnKitControl
[16:18:11.745][GoBackendProcess ][Info ] ⇨ http server started on \\.\pipe\docker_cli
[16:18:11.745][GoBackendProcess ][Info ] ⇨ http server started on \\.\pipe\dockerBackendApiServerForGuest
[16:18:11.745][GoBackendProcess ][Info ] ⇨ http server started on \\.\pipe\dockerBackendApiServer
[16:18:11.772][GoBackendProcess ][Info ] started port-forwarding control server on \\\\.\\pipe\\dockerVpnKitControl
[16:18:11.772][GoBackendProcess ][Info ] listening on unix:\\\\.\\pipe\\dockerVpnkitData for data connection
[16:18:11.772][GoBackendProcess ][Info ] enabling filesystem caching
[16:18:11.772][GoBackendProcess ][Info ] filesystem exports are: (2)
[16:18:11.772][GoBackendProcess ][Info ] volume control server listening on \\\\.\\pipe\\dockerVolume
[16:18:11.772][GoBackendProcess ][Info ] filesystem server listening on 00000000-0000-0000-0000-000000000000:00001003-facb-11e6-bd58-64006a7986d3
[16:18:11.772][GoBackendProcess ][Info ] file ownership will be determined by the calling user (\"fake owner\" mode)
[16:18:11.772][GoBackendProcess ][Info ] using mfsymlinks
[16:18:13.547][Updater ][Info ] Check for update process exited with 4294967295
[16:19:10.976][WSL2Provisioning ][Error ] Failed to deploy distro docker-desktop to C:\Users\patate\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: The operation timed out because no response was received from the virtual machine or container.
stderr:
[16:19:10.981][LinuxWSL2Engine ][Info ] Stopping windows side processes
[16:19:11.080][LinuxWSL2Engine ][Info ] Stopping engine
[16:19:11.164][LoggingMessageHandler][Info ] [48871f28] <BackendAPIClient start> POST http://backend/windowsfeatures/check
[16:19:11.818][LoggingMessageHandler][Info ] [48871f28] <BackendAPIClient end> POST http://backend/windowsfeatures/check -> 200 OK (took 654ms)
[16:19:12.404][LinuxWSL2Engine ][Info ] Terminating lingering processes and wsl distros and patching host file
[16:19:12.404][LoggingMessageHandler][Info ] [1fe0dd5b] <BackendAPIClient start> POST http://backend/dns/refresh-hosts
[16:19:12.567][WSL2Provisioning ][Info ] Checking docker-desktop
[16:19:12.567][WSL2Provisioning ][Info ] deploying WSL distro docker-desktop to C:\Users\patate\AppData\Local\Docker\wsl\distro
[16:19:12.568][LoggingMessageHandler][Info ] [1fe0dd5b] <BackendAPIClient end> POST http://backend/dns/refresh-hosts -> 204 NoContent (took 164ms)
[16:20:12.852][WSL2Provisioning ][Error ] Failed to deploy distro docker-desktop to C:\Users\patate\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: The operation timed out because no response was received from the virtual machine or container.
stderr:
[16:20:12.853][LinuxWSL2Engine ][Info ] Stopping windows side processes
[16:20:12.955][LinuxWSL2Engine ][Info ] Stopping engine
[16:20:13.030][EngineStateMachine][Debug ] sending state Docker.ApiServices.StateMachines.FailedToStartState to state change sink
[16:20:13.030][EngineStateMachine][Debug ] State Docker.ApiServices.StateMachines.FailedToStartState sent to state change sink
[16:20:13.030][EngineStateListener][Debug ] received state Docker.ApiServices.StateMachines.FailedToStartState from LinuxWSL2
[16:20:13.031][EngineStateNotificationRecorder][Debug ] Registered state {"State":"failed to start","Mode":"linux","date":1595881213}
[16:20:13.038][SystrayNotifications][Error ] System.InvalidOperationException: Failed to deploy distro docker-desktop to C:\Users\patate\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: The operation timed out because no response was received from the virtual machine or container.
stderr:
at Docker.ApiServices.WSL2.WslShortLivedCommandResult.LogAndThrowIfUnexpectedExitCode(String prefix, ILogger log, Int32 expectedExitCode) in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\WSL2\WslCommand.cs:line 142
at Docker.Engines.WSL2.WSL2Provisioning.<DeployDistroAsync>d__17.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 169
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.Engines.WSL2.WSL2Provisioning.<ProvisionAsync>d__8.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 78
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.Engines.WSL2.LinuxWSL2Engine.<DoStartAsync>d__25.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\LinuxWSL2Engine.cs:line 99
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.ApiServices.StateMachines.TaskExtensions.<WrapAsyncInCancellationException>d__0.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\TaskExtensions.cs:line 29
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.ApiServices.StateMachines.StartTransition.<DoRunAsync>d__5.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 67
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Docker.ApiServices.StateMachines.StartTransition.<DoRunAsync>d__5.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 92
[16:20:13.056][GUI ][Info ] Sending Bugsnag report 5ef4cbf1-da7d-4749-857f-8620ad06fcd4...
[16:20:13.203][GUI ][Info ] Bugsnag report 5ef4cbf1-da7d-4749-857f-8620ad06fcd4 sent
[16:20:13.204][SegmentApi ][Info ] Usage statistic: eventCrash
[16:20:13.208][Diagnostics ][Warning] Starting to gather diagnostics as User : 'C:\Program Files\Docker\Docker\resources\com.docker.diagnose.exe' gather.
[16:20:13.361][GoBackendProcess ][Info ] external: GET /events 200 \"Go-http-client/1.1\" \"\
[16:20:13.362][GoBackendProcess ][Info ] external: GET /forwards/list 200 \"Go-http-client/1.1\" \"\
[16:20:14.616][Engines ][Error ] Start failed with Failed to deploy distro docker-desktop to C:\Users\patate\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: The operation timed out because a response was not received from the virtual machine or container.
stderr:
at Docker.ApiServices.WSL2.WslShortLivedCommandResult.LogAndThrowIfUnexpectedExitCode(String prefix, ILogger log, Int32 expectedExitCode) in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\WSL2\WslCommand.cs:line 142
at Docker.Engines.WSL2.WSL2Provisioning.|DeployDistroAsync|d__17.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 169
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.Engines.WSL2.WSL2Provisioning.|ProvisionAsync|d__8.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 78
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.Engines.WSL2.LinuxWSL2Engine.|DoStartAsync|d__25.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\LinuxWSL2Engine.cs:line 99
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.ApiServices.StateMachines.TaskExtensions.|WrapAsyncInCancellationException|d__0.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\TaskExtensions.cs:line 29
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.ApiServices.StateMachines.StartTransition.|DoRunAsync|d__5.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 67
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Docker.ApiServices.StateMachines.StartTransition.|DoRunAsync|d__5.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 92
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.ApiServices.StateMachines.EngineStateMachine.|StartAsync|d__14.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\EngineStateMachine.cs:line 69
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Docker.Engines.Engines.|StartAsync|d__23.MoveNext() in C:\workspaces\edge-2.3.3.2\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\Engines.cs:line 108)
[16:23:28.974][Diagnostics ][Info ] Uploading diagnostics DBEA6031-040B-4FB3-87CA-41EBCC5B2321/20200727202013
[16:23:30.187][Diagnostics ][Info ] Uploaded succesfully diagnostics DBEA6031-040B-4FB3-87CA-41EBCC5B2321/20200727202013
[16:24:12.181][ErrorReportWindow ][Info ] Open logs
**Same issue**
Probably a dupe of #7808.
This WSL2 bug https://github.com/microsoft/WSL/issues/5648 does affect Docker, but it is not an issue in Docker itself. We are hopeful for a fix soon. Right now Windows builds 20175 and 20180 have this issue. You can disable the WSL2 backend in your Docker settings for now. You won't find docker in your WSL2 path anymore, but you can work around this temporarily with:
```
alias docker=docker.exe
```
Docker on the Windows side (powershell, cmd, etc) will behave normally.
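The alias above only lasts for the current shell session. A minimal sketch for persisting it, assuming a bash shell inside the WSL2 distro (adjust the rc file name for zsh or another shell):

```shell
# Append the alias to ~/.bashrc so new shells pick it up,
# skipping the write if the line is already present.
RC="${HOME}/.bashrc"
grep -qxF 'alias docker=docker.exe' "$RC" 2>/dev/null \
  || echo 'alias docker=docker.exe' >> "$RC"
```

Open a new shell (or `source ~/.bashrc`) afterwards for the alias to take effect.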
Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.
If you have found a problem that seems similar to this, please open a new issue.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked